Okay, so let's get started. Hello everyone and welcome to our webinar, already number 40 of this series of Latin American webinars on physics. I'm Nicolas Bernal from the University of Antonio Nariño in Bogotá and I will be your host today. We're super happy because today we're starting our fifth season already, and also because we have Marco Nardecchia from CERN, who will talk about the latest results from LHCb and in particular all this flavor business. Marco received his PhD from SISSA and, after a couple of postdocs, first in Denmark and then in Cambridge, he moved to CERN where he's working as a postdoc. Marco's talk today is on flavor anomalies and lepton flavor universality, and we're super glad to have him here as our speaker today. Please let me remind you that you can be part of the discussion by writing questions and comments via the YouTube live chat system, so don't hesitate to ask Marco as many questions as you want. Now I will hand over to Marco. Are you there? Yes, let me share the screen. Sure, OK. OK, you can see my starting slide, right? Yes, sure. OK, good. So thank you very much. It's a real pleasure to be here. Can you hear me? Yes, it works. OK, good. It's a real pleasure to be here to talk about this topic, which is quite a hot topic for our community, the flavor physics and BSM (Beyond the Standard Model) community. I will start with a brief introduction and then I will discuss some flavor anomalies. I'm going to talk about a series of anomalies, not only the very recent one which was announced last week, because I think the importance of this last measurement has to be seen in the context of all the flavor anomalies. So let me briefly start with something which may be trivial: indirect searches for new physics. We know that the Standard Model is very successful in describing physics up to the electroweak scale. We also know that there is new physics.
We know that there is new physics because there is experimental evidence for it, like neutrino masses, dark matter, and so on. But unfortunately, we didn't find any effect in direct searches; namely, we were not able to produce on-shell anything associated with new physics, at least until now. So there is a very interesting option: to study indirect effects. How to do that? One approach is to treat the Standard Model as an effective field theory. The Standard Model is then just the leading term of a non-renormalizable Lagrangian, which is this one here. These non-renormalizable terms are constructed out of Standard Model fields, weighted by some coefficients and by the new-physics scale Λ raised to a given power, which is fixed by the dimensionality of the operator. Now, what is the strategy for indirect probes? The idea is the following. You consider a process which contains only Standard Model particles, and then you know that on top of the Standard Model contribution, you may get some contribution also from all these powers of non-renormalizable operators. So your full amplitude A is given by a Standard Model piece plus a new-physics correction. Now it's very important to measure one observable precisely, for example the decay rate of a particle into some final state. And it's also very important to have a good knowledge of the Standard Model prediction, because if you know this part here, the measurement, and you know the Standard Model, then you can extract information on the new physics. This information can go two ways, let's say. If you are unlucky, you only get bounds on this combination. But if you are lucky and you start to see some effect, then you can really start to understand the size of this combination, coefficient over the new-physics scale squared.
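Schematically, the EFT logic just described can be written in the standard parametrization, with Λ the new-physics scale and dimensionless Wilson coefficients c:

```latex
\mathcal{L} \;=\; \mathcal{L}_{\rm SM} \;+\; \sum_{d>4}\sum_i \frac{c_i^{(d)}}{\Lambda^{d-4}}\,\mathcal{O}_i^{(d)},
\qquad
\mathcal{A} \;=\; \mathcal{A}_{\rm SM} \;+\; \mathcal{A}_{\rm NP}\!\left(\frac{c}{\Lambda^{2}}\right),
```

so a precise measurement of the amplitude, together with a reliable Standard Model prediction, constrains only the combination c/Λ², not c and Λ separately.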
Something that is common in this effective-field-theory approach is that if you see an effect, you are not able to disentangle the two, but you get an overall piece of information. Then if you want to progress, if you want to know the scale of the new physics, you have to add more model-building information to the game. But I will talk more about this aspect later on, when I speak about the anomalies. So if this is the strategy, which are the good observables? Good observables are those in which the Standard Model contribution is very, very small or even exactly zero. For example, if you take a process like proton decay, you have a clean prediction that the Standard Model gives zero contribution. So it's enough to see a few events of this form, like the proton decaying into a π⁰ and a positron, and then you are convinced that there is new physics. If there is not such a suppression from a symmetry like baryon number, you can, for example, move to flavor physics: a nice place to look for new-physics effects is in processes of this form, where a down-type quark of flavor i is transformed into another down-type quark of flavor j, a flavor transition. We will be interested in a b quark going into an s quark, this process here, and then you get some leptons in the final state. This is a nice place to look because in the Standard Model you get a suppression, which is given by the fact that there is a loop factor: this process is a flavor-changing neutral-current process, and in the Standard Model it is always generated radiatively. Then you also get CKM suppression and suppression from the loop function. So in general it's a nice place to look for new physics, because in the Standard Model it is quite suppressed, so you may hope to see something in this channel.
Before moving to the actual anomaly which has been observed recently, let me just comment on a last aspect which concerns the Standard Model: the flavor of the leptons. The leptons appear in two places in the Lagrangian: in the gauge sector and in the Yukawa sector. Let us focus on the gauge sector. In the gauge sector, we get this gauge kinetic term for the left-handed lepton doublets and for the right-handed singlets. Then you see there is a covariant derivative. Yeah, there is an i — also this one is an i, sorry, it's a typo — which is the flavor index. And the same is true for the other term. So at the level of this part of the Lagrangian, there is no way to distinguish among electron, muon, and tau. You can also see this at the level of global symmetries: this part of the Lagrangian has a huge global symmetry, which is a U(3) for the left-handed leptons and a U(3) for the right-handed leptons. The message is that all the gauge interactions contained here in the covariant derivative are flavor universal. For example, the Z couples to the electron and to the muon with the same strength, with the gauge coupling, of course; it is universal in lepton-flavor space. So we say that gauge interactions are lepton-flavor universal. You find this acronym very frequently in papers discussing the anomalies: LFU, meaning lepton-flavor universality. Now, if you switch on the Yukawa couplings, we start to have a source of breaking of this huge symmetry. The Yukawas start to distinguish electron, muon, and tau, and they distinguish them in two ways. First, at the level of the masses: after the Higgs gets a vev, you generate a mass, and then of course the masses of the electron, the muon, and the tau are different. Second, you can differentiate them through the Higgs interaction, because the strength of the Higgs interaction is different for the different leptons.
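In formulas, the lepton gauge sector just described reads (with i = 1, 2, 3 the flavor index):

```latex
\mathcal{L}_{\rm gauge} \;\supset\; \sum_{i=1}^{3}\left(
\bar{L}_i\, i\gamma^\mu D_\mu L_i \;+\; \bar{e}_{Ri}\, i\gamma^\mu D_\mu e_{Ri}\right),
```

which is manifestly invariant under $U(3)_L \times U(3)_{e_R}$ rotations in flavor space; the Yukawa term $-\,Y_e^{ij}\,\bar{L}_i H\, e_{Rj} + \mathrm{h.c.}$ is the only source of breaking of this symmetry.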
However, for all the flavor observables, the Higgs interactions are really irrelevant. So what is left, what deviates from universality, is just an effect due to the different masses. And in some cases, for example, the masses of the electron and the muon are very small compared to the scale of the process, so in practice they are massless, and you really expect lepton-flavor universality in the rates. This is the prediction of the Standard Model. So it's very important to look at observables that can test lepton-flavor universality — universality of the gauge interactions, in a sense, which is precisely this one. This is very important because if you start to see some deviation between the behavior of electrons and muons, it means that there should be another mediator which starts to distinguish among them. OK, after this brief introduction to try to tell you where to look, now I really move to the anomalies. This is the list of flavor anomalies. I'm taking this nice plot from a talk at Moriond QCD, and the red is just my update after one year, because something has changed, like the measurement of RK*. On this axis here, you find the significance of the effect — how anomalous it is according to the majority of papers, quoted in standard deviations compared to the Standard Model. On the other axis, there is a quantity which is completely arbitrary, which is the theoretical cleanness of the process. This is because in flavor physics, every time you start to see something anomalous, the first reaction is to rethink the hadronic uncertainties. Sometimes there is a fight and discussion among authors on what is the right size of the hadronic error to quote, and of course this is somewhat subjective.
In my opinion, if you ask what the true significance of the anomalies is, it's not a matter of simply quoting sigmas, but a matter of having an understanding of the global status of flavor physics. So on this axis you move from low, meaning dirty observables, up to very clean ones. The first message I want to give is that, to me, we still don't have super-clear evidence of a discovery of new physics. But there are some channels that are very, very clean. These channels are ratios that test universality between electrons and muons in the sense I was discussing before. For example, the ratio of B → K μ⁺μ⁻ over B → K e⁺e⁻ is called RK. You see, I'm taking a ratio of muons over electrons; the rest is kept fixed. So it's quite a clean observable. The same is true for RK*. But you see that singly they are not super significant, in the sense that this RK is about 2.6 sigma, and RK*, as we are going to see, is about two and a half sigma in two different bins. But in any case, they deviate from the Standard Model, so it's very interesting. Another clean observable is this one at the top, h → τμ. It's very clean because in the Standard Model it's like proton decay: you have a clean prediction, and the prediction is zero, because the Higgs couplings are flavor diagonal. This one was seen by CMS at Run 1; at Run 2 it doesn't seem to still be there, but the final answer will come with more data. Then going down, there is another very interesting process, which is B → D* τν. You start to see that in the final state there are always leptons appearing — so maybe there is a hint that something is not really flavor universal. And then going down, there are the angular observables, again in B → K* μ⁺μ⁻, and there are branching ratios like Bs → φ μ⁺μ⁻.
Then there is a set of observables which start to be more dirty, like ε′/ε, which has to do with kaon physics. The more you go down in energy, the closer you are to the scale of QCD, and the more complicated your theoretical prediction becomes. And then there is the long-standing g−2. So this is more or less the situation. As I was saying, for some channels the main limiting factor is statistics because they are very clean; for other channels, instead, the theoretical uncertainties really play the dominant part. So to be convinced that there is new physics in one of these places, what you need is really correlation — correlation among different observables. What I'm now going to focus on: I would like you to take one message from this table. You see that there are a lot of B-physics transitions: B → K μ⁺μ⁻, B → K e⁺e⁻, B → K* μ⁺μ⁻. There are a lot of muonic ones. So maybe we can cook up a nice story, and there is a way to correlate very nicely the anomalies in transitions involving the b quark. From now on we will focus only on transitions involving the b quark. There are two sets, in my opinion, which are distinct and very, very interesting. The first set of anomalous measurements comes from flavor-changing charged currents. These are transitions that in the Standard Model are associated with a change of flavor but also a change of electric charge: a b quark that goes to a c quark, and then you look at the semileptonic final state. The important part of the story is that these are generated at tree level. The other class of anomalies, on which I will focus more, are the flavor-changing neutral currents: a b quark that goes to an s quark, and then you get, again, leptons in the final state.
The main difference compared to the first class of anomalies, which I'm going to flesh out in a second, is the fact that this second class is generated at loop level and is suppressed by the loop and also by some CKM elements. So I will first show very briefly what's going on in the charged current, and then I will focus mostly on the second part. In the charged current, just to remind you, the transition is a b quark that goes to a c quark, and then you look at leptons in the final state. There are two main observables, which are these two ratios, RX, where X is a D meson. A D meson is a meson that contains a charm quark; it can have spin 0, and it's called D, or spin 1, and it's called D*. So the observable is the ratio of the B decay to D or D* τν, compared to the rate of the B decay to the same final state where you replace the tau with one of the light leptons, a muon or an electron. You see already, in the way the observable is cooked up, that what we want to test is universality in the lepton sector. OK, the measurements are reported in this plane, RD versus RD*. The story is quite long. It started roughly five years ago with a measurement by BaBar, which is this circle here, the black one. Then Belle and LHCb started to put their measurements in this plane. The combination in this RD–RD* plane is the red circle here, and you see that the theoretical prediction sits here, in this small purple ring. If you want a feeling of how different these two are, the combined significance is about 4 sigma. And the Standard Model prediction seems to be quite under control because, in this case, charged currents are easier: there are lattice results for the D, while for the D* there are very good results from the limit of heavy-quark symmetry, which can be trusted very well in this sector. So the situation is very nice.
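For reference, the ratios just described are conventionally defined as:

```latex
R_{D^{(*)}} \;=\; \frac{\mathcal{B}\!\left(B \to D^{(*)}\,\tau\,\bar{\nu}\right)}
                        {\mathcal{B}\!\left(B \to D^{(*)}\,\ell\,\bar{\nu}\right)},
\qquad \ell = e,\ \mu,
```

so that the hadronic form factors largely cancel in the ratio, and a deviation from the Standard Model value directly signals a violation of tau-versus-light-lepton universality.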
What scares me, from the point of view of beyond-Standard-Model physics, is the size. The size of the effect, in this case, has to be very big, because in the Standard Model the amplitude is given by the Fermi constant times the relevant CKM element, which tells you that the overall scale of these processes is something of order 1 TeV. So your new physics has to compete at the level of 30%, 40% of the Standard Model amplitude if you want to match the central value here. It means that the effect has to be huge: you want light particles, at the TeV scale or below, and also large couplings. So you are already in a position which is a bit difficult for model building. I'm not saying it's not possible, but I really see it as challenging from the model-building point of view; you really have to stretch your model into a corner of the parameter space. So it's really something we should keep in mind and think about. Let me stress that this is a signal — if you want to think about new physics — that the tau lepton is behaving slightly differently from the muon and the electron. OK, I now close this first class of anomalies and move to a different set, where the situation is a bit different. In the previous case, we had one or two observables which have been seen consistently by different experiments. Here it is the opposite: there are different observables, seen mostly by the LHCb collaboration. But the point I want to make is that you can correlate them easily if you assume for a second — if you want to dream about new physics — that all these anomalies can be explained in a simple and compact way, at the level of effective field theory, by new physics in a specific operator. So now I'm listing the interesting observables in this case, and then I'll go through all of them, just to tell you the status.
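As a rough numerical check of the statement above that the overall scale of these tree-level charged-current processes is around a TeV, one can convert the SM coefficient 2√2 G_F |V_cb| into an effective scale. This is a back-of-the-envelope sketch, not a fit; the inputs are approximate standard values assumed here for illustration.

```python
import math

# Illustrative inputs (approximate standard values)
G_F = 1.166e-5   # Fermi constant, GeV^-2
V_cb = 0.041     # CKM element |V_cb|

# The SM b -> c tau nu amplitude scales as C_SM = 2*sqrt(2)*G_F*|V_cb|,
# which has dimension GeV^-2; the associated effective scale is 1/sqrt(C_SM).
C_SM = 2 * math.sqrt(2) * G_F * V_cb
scale_GeV = 1 / math.sqrt(C_SM)
print(f"Effective SM scale: {scale_GeV / 1e3:.2f} TeV")

# A ~30% correction to the amplitude needs C_NP ~ 0.3 * C_SM,
# i.e. an even lower new-physics scale if the couplings are O(1).
scale_NP_GeV = 1 / math.sqrt(0.3 * C_SM)
print(f"Scale for a 30% effect with O(1) couplings: {scale_NP_GeV / 1e3:.2f} TeV")
```

Both numbers come out near or below a couple of TeV, which is the speaker's point: the new states must be light and/or strongly coupled.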
First of all, there is a tension in some angular observables of a specific decay, B → K* μ⁺μ⁻. The second set of interesting anomalies is in the branching ratios: now you are integrating over the distributions, but you still see anomalies in some channels, like this one — sorry, Bs → φ μ⁺μ⁻. And then there is the third class, which is really the most interesting one to me: the hints of lepton-flavor universality violation in two observables which are very clean, RK and the famous RK*, which was presented last week. I will also flesh out this other measurement, Bs → μ⁺μ⁻, which is in good Standard Model agreement but is very important for model building. OK, let's start this journey through the various anomalies, beginning with B → K* μ⁺μ⁻. The B meson decays into a K*, which in reality promptly decays into a kaon and a pion. So this is not really a three-body decay; we are talking about a four-body decay, because we get a kaon and a pion on top of the muons in the final state. Out of this decay, you can study the full differential distribution of the decay rate: a differential distribution in terms of the angular observables, like the angle between the leptons, the angle between the kaon and the pion, and so on. And there is a very important variable, called q², which is the invariant mass squared of the μ⁺μ⁻ pair in the final state. So out of one single decay mode, you can extract something of the order of 20 interesting observables. In 2013, which is roughly four years ago now, a discrepancy was found in one of these observables, called P5′. P5′ is an observable defined with a specific integration over these angles, and what you're left with remains differential in q².
q² is reported in this plot here. The purple is the Standard Model prediction according to this group here, while these black points are the experimental results. You see a discrepancy in one of these bins, quite evident already by eye, in this region here. Before commenting on the origin of that, let me note that there is a white box here where there are no results. In reality, there are results in this region, but there the μ⁺μ⁻ pair is supposed to come not from the continuum, let's say, but from the J/ψ: the underlying process is B → K* J/ψ, and then the J/ψ decays to muons. With a K* and a J/ψ in the final state, you already know that the theoretical prediction inside this band is very complicated. So once you see a plot like that, you can start to think about the origin of this discrepancy, which was quoted to be 3.7 sigma in this bin here. For the possible explanations, I think we can start with the easiest one, which is just a statistical fluctuation of the data. After all, it's just 3.7 sigma; we have observed statistical fluctuations even more severe than this one. So it's an option which is on the market. The second possibility is the hadronic uncertainties. Hadronic uncertainties, because what I reported here is just the quoted hadronic uncertainty from this group, mainly from Barcelona. But if you ask another group, like this one from Rome, you see that no deviation is present once all the theoretical uncertainties are taken into account. So you see that there are really very different approaches to this observable. Most likely this one is a bit conservative, in my opinion, while this one is maybe slightly more optimistic. But this just gives you the feeling that the discussion on this observable is really ongoing.
Then, of course, there is another option to explain the data, which is the most beautiful one, but we still don't know: this discrepancy may be originated by new physics. This is really the dream. What happened after the measurement? In 2015, at Moriond, the LHCb collaboration reported the data again, based on Run 1 but with much more statistics. What they did was to show the result splitting the most anomalous bin into two smaller bins, to see whether the effect was there also in the low part of the bin. And you still see a discrepancy — again, a discrepancy when you treat the uncertainty using this specific Standard Model prediction. It's interesting that the anomaly is there also in the low part of the bin; of the other part you are a bit afraid, because maybe you are close to the charmonium resonances, which start in this region. Then in 2017 there was another update at Moriond, and it's interesting that ATLAS and CMS also started to have their say on the topic. The blue here is ATLAS and the green is CMS, and they are going in different directions. Still, I think ATLAS and CMS are not experiments dedicated to this kind of search, but this is just the start; maybe they can improve a lot in the future. In this plot, maybe it's more interesting to see that Belle has another point in this interesting region, which is still not in agreement according to this specific Standard Model prediction. OK, this is the situation up to Moriond, so up to one month ago. We can now move to another set of observables. I was telling you that there are different observables that start to be in tension with the Standard Model, and there are various measurements of branching ratios which are low compared to the Standard Model prediction. You have to look now at the last four rows of this table here. Now we are integrating also over the extra angular distributions.
We are left with a branching ratio differential in q² only, and you can also integrate over a region of q². You see that in various channels — K*, K, and also Bs → φ — the Standard Model prediction is systematically above the measurement. Again, this seems to go in the same direction as the previous anomaly: you start to see something in muons. Also, let me note that just one observable alone, this Bs → φ μ⁺μ⁻, if you treat the errors in a given way and ask about the pull, the discrepancy with respect to the Standard Model, already shows an effect which starts to be quite interesting, at the 3.5 sigma level. Again, here we are putting in some theoretical input to estimate the hadronic uncertainties, which could be the origin of this. Could it be a statistical fluctuation? It can be, but now it looks more difficult to me, because you would need statistical fluctuations in various channels. It looks more like a systematic effect. Systematic in a double sense: it can be the hadronic uncertainties — maybe the hadronic uncertainty is correlated among the different observables — or it can be a systematic effect in the sense of new physics that really affects all the decay channels. But again, this is one of those observables whose theoretical uncertainty is debated. Now we really go to something which starts to be really super interesting and is giving a boost to the whole story, which is the measurement of RK and RK*. RK was measured three years ago and is defined in the following way: it is an integral over some q² region — q² is the variable I defined before — of B → K μ⁺μ⁻ over B → K e⁺e⁻. Now the only scale entering the game is the mass of the bottom quark: the masses of the muon and the electron are very small compared to the mass of the bottom.
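Written out, the observable just defined is:

```latex
R_K \;=\; \frac{\displaystyle\int_{q^2_{\min}}^{q^2_{\max}} \frac{d\mathcal{B}}{dq^2}\!\left(B^+ \to K^+ \mu^+\mu^-\right) dq^2}
               {\displaystyle\int_{q^2_{\min}}^{q^2_{\max}} \frac{d\mathcal{B}}{dq^2}\!\left(B^+ \to K^+ e^+ e^-\right) dq^2},
```

with the LHCb measurement performed in the bin 1 < q² < 6 GeV², where the Standard Model predicts RK = 1 up to percent-level QED corrections.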
So this is really a clean test of lepton-flavor universality. In the Standard Model, the prediction is one, with a small correction due to the fact that the emission of bremsstrahlung photons is slightly different for electrons versus muons. But there is consensus that this effect is of the order of a percent. So the message is that this observable is very clean; we expect to see one, because in the Standard Model gauge interactions are lepton-flavor universal. Instead of observing one, what we saw is 0.75 plus or minus the error, which is dominated by statistics at the moment — let's say plus or minus 0.1 — which is, I would say, a genuine 2.6 sigma effect. And now, for the explanation, I can no longer invoke the hadronic uncertainties, because there is a very nice cancellation of all the hadronic effects between the numerator and the denominator. It can still be a statistical fluctuation in this channel — that is a fair option — or, again, it can be due to new physics. Of course, when you start to see effects of this kind, you can also start to question the analysis and so on. Maybe you think there is an issue with the reconstruction of muons and electrons. I think the experimental collaboration is really on top of this; they really want to check that there are no issues. Let me tell you the strategy for RK, at least my understanding of it. The strategy is: if you are afraid that you are miscounting electrons and muons, you know that if you reach the q² region where you really produce the J/ψ, then in the Standard Model the J/ψ decays to electrons and to muons with the same branching ratio, about 6% for both. So it's very, very clean, and you can use the J/ψ to calibrate how many electrons and muons you have. So these RK and RK* measurements are always referred to a control channel, which is the J/ψ. But in any case, this is something to which the experimental collaboration is really attentive.
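To see where the quoted 2.6 sigma comes from, a naive Gaussian estimate using the published LHCb Run 1 values (central value 0.745, statistical error +0.090/−0.074, systematic ±0.036) reproduces it; treating the asymmetric errors this crudely is an illustrative simplification, not the collaboration's statistical procedure.

```python
import math

# LHCb Run-1 RK measurement (published values)
rk_meas = 0.745
stat_up = 0.090   # statistical error on the side toward the SM value
syst = 0.036
rk_sm = 1.0       # SM prediction, up to ~1% QED corrections

# The SM value lies above the measurement, so use the upward error
sigma = math.hypot(stat_up, syst)
pull = (rk_sm - rk_meas) / sigma
print(f"RK pull: {pull:.1f} sigma")  # prints "RK pull: 2.6 sigma"
```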
They keep checking that their analysis is correct. Now I really move to last week, which is the measurement of RK*. RK* is defined precisely as before; the only replacement is the K with a K*, in the numerator where there are muons and in the denominator where there are electrons. And this time the experimental collaboration provides two data points: one in the so-called central bin, between 1 and 6 GeV², which overlaps with the q² region of RK, and one in an interesting low-q² region, at very low energy. Some very brief comments. Again, the main message is that it's clean in the same sense as RK. We expect roughly one in the Standard Model — in the low-q² bin slightly less than 1, but practically 1. The main difference is that the K* has spin 1: it's a spin-1 particle with three polarizations, so the Lorentz structure is quite different. But again, in the Standard Model we predict 1. And this is the same channel, K* μ⁺μ⁻, where we are observing the discrepancies in the angular observables. So this is the scenario: the Standard Model predicts 1, and what was found is the following. In the low-q² bin, we see 0.66, with an error of order 10% or slightly above, and in the central q² region, we observe 0.67. Instead of measuring 1, they measure a deficit, which can be quantified at the level of 2.4 sigma in each bin. Again, if you take this as a single measurement, I wouldn't say it's super exciting, because it's just one measurement. But if you start to put everything together, the story starts to be very, very interesting. Finally, there is another observable which deserves attention in general, Bs → μ⁺μ⁻. This has been considered one of the golden channels for LHCb, because in the Standard Model it is strongly suppressed and sensitive to certain kinds of new-physics structure, and various models predict huge effects in this channel.
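Treating the two RK* bins as independent Gaussian measurements, a naive combination in quadrature of the two ~2.4 sigma deficits quoted above gives a feeling for the overall tension. This is a rough illustration only; the proper combination must account for correlations and non-Gaussian errors.

```python
import math

# Deviations from the SM in the two q^2 bins, as quoted in the talk
pulls = [2.4, 2.4]  # low-q^2 bin and central bin

# Naive quadrature combination of independent Gaussian pulls
combined = math.sqrt(sum(p ** 2 for p in pulls))
print(f"Naive combined significance: {combined:.1f} sigma")  # ~3.4 sigma
```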
So when you look for new physics in the anomalies, this is usually a good constraint, and it tells you that you cannot do whatever you want. There is good agreement between the Standard Model and the experiment — there is a slight mismatch at the level of 1 sigma — but I'm putting it there just to tell you that it plays a role when you want to think about new physics. To summarize this part: we see hints of new-physics effects in b → s μ⁺μ⁻ transitions in various channels. And now let me move to the new-physics part. The first thing you want to do is not to show a model, but to understand first the model-independent interpretation of these data. The strategy is to write an effective Hamiltonian at the scale of the process, and then you consider the operators that are relevant, that can give you a sizable effect, including explaining these anomalies. I don't want to enter into the details, but you will often find in the literature these names: O9, O10, C9, C10. They are just the names of some specific four-fermion operators which contain a b quark and an s quark, with some chirality, together with a lepton current. Usually, in all the fits you find, this lepton current is associated with the muon, because it's in muons that we are seeing some effect. Of course, when you want to do a fit, you have to include your Standard Model prediction, and then you have to estimate the hadronic uncertainties. I'm not entering that discussion, but I want just to give you a flavor of what the main hadronic uncertainties are. The main hadronic uncertainties are the form factors: you have to evaluate a local operator of quarks between two mesons. For this, we have some tools: at high q² you can use lattice results, and at low q² you can use other theoretical tools.
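For reference, in one common convention the operators just mentioned appear in the effective Hamiltonian as:

```latex
\mathcal{H}_{\rm eff} \;\supset\; -\frac{4 G_F}{\sqrt{2}}\, V_{tb} V_{ts}^*\, \frac{e^2}{16\pi^2}
\left( C_9\, \mathcal{O}_9 + C_{10}\, \mathcal{O}_{10} \right) + \mathrm{h.c.},
\qquad
\mathcal{O}_9 = (\bar{s}\gamma_\mu P_L b)(\bar{\mu}\gamma^\mu \mu),
\quad
\mathcal{O}_{10} = (\bar{s}\gamma_\mu P_L b)(\bar{\mu}\gamma^\mu \gamma_5 \mu),
```

so C9 multiplies a vector muon current and C10 an axial one; the purely left-handed lepton-current combination corresponds to C9 = −C10.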
But even so, there are still some uncertainties in how to get the dependence of these matrix elements over the full q² range. And it gets even more complicated: sometimes you are forced to consider non-local operators, and then you have to estimate the matrix elements of these non-local operators, which is very, very tough. All the art and all the discussion is about how to estimate these effects — how big they are in the Standard Model. In some cases you can use frameworks like the heavy-quark expansion and so on, but there is always a big discussion in trying to understand the overall effect. Once you have your own preferred way of treating the errors, you can move on and see what the fit gives. Now I'm presenting the results from before RK*; in the last part of the talk, in the last five minutes, I will talk about what happened after the measurement of RK*, and then I will show you very briefly a couple of messages for BSM, beyond the Standard Model. This fit was done by this group soon after Moriond, but before the RK* measurement. These are the hypotheses of switching on new physics in one single Wilson coefficient — one specific four-fermion operator in this basis here. You see the best hypothesis is in this coefficient, which is called C9. You get a best-fit value which is telling you that it is of order 30% of the Standard Model, something like that. In this fit everything has been included: dirty observables and also clean observables. So you reach a pull, already before RK*, which is close to five sigma. Let me stress again that here we are including also the non-clean observables, with some treatment of the errors. Then you see there are various hypotheses, and this C9 is the best. C9 means that the current for muons is a vector current. And I think it is very interesting to ask which is the best among the chiral operators.
By chiral operators I mean operators with definite chirality, and the best is the operator which has left chirality for quarks and left chirality for leptons. Studying the fit in this basis is very useful if you want to cook up a BSM, beyond Standard Model, theory, because in that case you usually expect models to preserve chirality if you have, for example, one single mediator. So it's very interesting that we are highlighting one specific direction. What I want to remark is that these various different observables all collapse into a single interpretation in one single Wilson coefficient, which is consistent: just switching on something there, we found that everything is consistent and we get a good fit. So all these observables are going coherently in this direction. Okay, what do I have to say after RK*? I have to say that there was great excitement, and the day after, already six papers appeared on the arXiv, from six different groups, which is also good. I had a look; I know about my own work, but I also looked at the others, and it's interesting that the conclusions are more or less the same. In my opinion, the most interesting message coming out of this measurement of RK* is that now we can draw conclusions, we can start to really think about this problem, using only the clean observables. If you just use RK and RK*, you can draw conclusions which are very, very strong, because the fit I showed you before was including everything: not only the ratios RK and RK*, but also all the other observables. And if you just look at these clean observables, RK and RK*, you don't even need a fit to see that there is a discrepancy. So in this plot, the one on the left, we are assuming new physics in the muon sector only. You see that on this axis there is RK*; the measurement is this point here, and then you see how far away the Standard Model is.
Now, all these colored lines correspond to new physics in one single Wilson coefficient. And it's quite remarkable that with just this couple of measurements you can highlight that there is one kind of operator, this one with the left-handed current, or if you want, the one with the vectorial current, which is good. Again, what I want to stress is that in this plot I'm just drawing the result without using the extra observables, the angular observables, which, if you take them into account, go in the same direction, if you treat the errors according to some models. Let me mention that when you have new physics in muons, you always have destructive interference with the Standard Model, because you want a reduction of this ratio, RK* less than one, and the muon observable is in the numerator. So you want something less than one, and the only way is to have destructive interference: the only possibility is that your amplitude has an overlap with the amplitude of the Standard Model, in order to have destructive interference. There is also the open possibility that there is new physics in electrons, but this is true only if you consider the clean observables RK and RK*, because RK and RK* are just ratios, so you don't know if the new physics is in the numerator or in the denominator. It's an option which is open, but if you want to follow the extra hints in the other channels, then the easiest assumption you can make is new physics in the muon sector only. So the deviation from the Standard Model, as quoted by various groups, is that using only the observables which are very clean, you reach four sigma. So really, the story starts to be quite serious. Okay, there is an issue with the low q squared bin, but I don't have time; if you are interested, you are more than welcome to ask me. And then finally, I conclude with two minutes about the new physics.
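For reference, the clean ratios under discussion are defined, in standard notation, as:

```latex
R_{K^{(*)}} \;=\;
\frac{\displaystyle\int_{q^2_{\min}}^{q^2_{\max}} \frac{d\mathcal{B}}{dq^2}\big(B \to K^{(*)} \mu^+ \mu^-\big)\, dq^2}
     {\displaystyle\int_{q^2_{\min}}^{q^2_{\max}} \frac{d\mathcal{B}}{dq^2}\big(B \to K^{(*)} e^+ e^-\big)\, dq^2}
\;\stackrel{\rm SM}{\simeq}\; 1
```

They are called "clean" precisely because the hadronic form factors largely cancel between numerator and denominator, so the Standard Model prediction is essentially free of the uncertainties discussed above.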
Okay, so as soon as you get these results from the model-independent interpretation, you know which are the right operators: you know the right chirality, the right Lorentz structure, that give you a good fit. For example, one of the best options is to take a left-handed current for quarks and a left-handed current for muons. And from the fit you also already know the overall effect: if you normalize the operator with a scale, the scale you need is about 30 TeV, which, notice, is much heavier than the scale you need for the first class of observables I showed you before. The only thing you know, looking at indirect probes, and this was one of the first messages I wanted to give you, is the product, the combination of couplings and masses; you don't know precisely the mass of the mediator. For what concerns the mediator, there are only two options on the market. You can exchange at tree level a Z prime, and the quantum numbers, if you want a left-handed current everywhere, are just a singlet under SU(2) or a triplet under SU(2). The other option is to exchange in the t-channel a leptoquark, a particle that couples to a quark and a lepton. Then you have two options: the leptoquark can be either a spin-one particle or a spin-zero particle. For a spin-zero particle, you need a triplet under SU(3), a triplet under SU(2), and hypercharge one third. For the vector, there are two options: this one, which is called U3, and this one, which is called U1. Again, what you know from this simplified-models approach is that you just need to switch on two couplings, the couplings that you need to produce the effect, and the mass scale. So you see that you can reproduce the effect with very heavy new physics and large couplings, or you can go down to the direct searches for the particle, and then you can have a smaller coupling.
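The coupling-mass degeneracy described here is easy to make quantitative. This is my own illustration, not a slide from the talk: an indirect measurement only fixes the combination g_q g_l / M², so, taking the fitted effective scale of order 30 TeV at face value, the coupling product needed scales quadratically with the mediator mass.

```python
# Sketch: indirect probes fix only g_q*g_l / M^2, not g and M separately.
# LAMBDA_EFF_TEV is the order-of-magnitude effective scale quoted in the fit.

LAMBDA_EFF_TEV = 30.0

def coupling_needed(mediator_mass_tev):
    """Product of couplings g_q*g_l such that g_q*g_l / M^2 = 1 / Lambda^2."""
    return (mediator_mass_tev / LAMBDA_EFF_TEV) ** 2

# A light mediator near the direct-search reach needs tiny couplings;
# a 30 TeV mediator needs couplings of order one.
for m in (1.0, 5.0, 30.0):
    print(f"M = {m:5.1f} TeV -> g_q*g_l ~ {coupling_needed(m):.4f}")
```

This is exactly the statement in the talk that you can sit anywhere along the line from heavy mediator with large coupling down to a light mediator with a small coupling, until direct searches cut you off.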
So you see, at this level you cannot say where the scale of the new physics is; for that you need more model-building input. And then, if you are a theorist, maybe you are not very happy just switching on ad hoc couplings. You would like to work more and build on top of the simplified model: you want to add another layer to your theory construction, and you would like to have a model with a flavor structure which is more motivated, and so on. I don't have time to discuss that, but there are attempts in this direction. Finally, the last option at the level of simplified models is that this 30 TeV scale is large enough that you can also try to explain the anomaly invoking new physics at the loop level. Here you get new fermions and new scalars in the loop; you can connect b to s mu+ mu−, so you can explain the effect. You have to fight with constraints; as in the previous case, one of the main constraints is Bs mixing. And if you pass these bounds, you may also hope, with a loop of the same particles, to try to explain the g minus 2. So in models with loop mediators, you can try to find a nice connection also with the g minus 2. And the fact that there is a loop which suppresses the rate is telling you that the mediators in the loop now cannot be very heavy, so maybe you can target them: you can really hope to see something at the LHC in direct production. So, moving to the conclusions: I think it's still premature to claim a discovery of new physics in these meson decays. It's premature, but it's really exciting that all the current anomalies in the B decays have a simple and consistent interpretation at the level of effective field theory. There is a very nice correlation among all of them, and it seems that a new physics explanation is possible.
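The statement that loop mediators cannot be very heavy follows from simple power counting. As a hedged back-of-the-envelope sketch (my own, with the usual one-loop factor as the assumption): if the operator is generated at one loop, the coefficient picks up a g²/(16π²) suppression, so matching the same tree-level effective scale pulls the mediator mass down by roughly 4π.

```python
import math

LAMBDA_EFF_TEV = 30.0  # tree-level effective scale suggested by the fit

def max_mediator_mass_loop(coupling=1.0):
    """Mediator mass reproducing 1/Lambda^2 when the operator arises at one
    loop: g^2/(16*pi^2*M^2) = 1/Lambda^2  =>  M = g * Lambda / (4*pi)."""
    return coupling * LAMBDA_EFF_TEV / (4 * math.pi)

print(f"M ~ {max_mediator_mass_loop():.1f} TeV for g = 1")  # prints: M ~ 2.4 TeV for g = 1
```

A few TeV is exactly the range where direct production at the LHC becomes relevant, which is the point being made about targeting the loop particles directly.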
After the measurement of RK*, various conclusions that before you could draw only using the dirty channels, let's say, you can now draw using only clean observables, and the Standard Model is away from the data at the level of roughly four sigma. The anomalies in this transition, the b to s transition, can be explained with a simple tree-level exchange of leptoquarks or a Z prime boson. And then you can also start to think about more ultraviolet-complete theories where you also attempt to address flavor. Now, a good point about this whole story, in my opinion, is that we have data. It's not like the g minus 2, where you have to wait a long time to repeat the experiment and so on. Let me just tell you that we already have Run 2 data recorded, so we just need LHCb to look carefully at the Run 2 data and show the results, and already we will have more interesting things to say. Also, not only can we update what has been measured, but we can also measure other kinds of ratios which are similar in spirit to RK and RK*. In my opinion, for the true final verdict on this full story, we need an independent experiment to check these anomalies. And luckily we have an experiment, which is Belle II, which is going to start taking data, optimistically, in one year. Then, after a few years, we will start to really get an independent check from a different experiment, and at that point I think we can really have a final word on this whole story. Okay, for me, that's all. Thank you very much. Thank you very much, Marco, for this super nice talk. So let me remind you guys that you can ask questions to Marco via the Q&A system, so please don't hesitate to do it. Are there questions from the audience for Marco? I'll start with basic questions then. Sorry, Amelie, I saw that you were going to start with a question, but I'm going to beat you to it. So, it's a very basic question.
So the reason why a Z prime is one of the candidates for the mediator is because if you put in a scalar, it would give you some other sort of effective operator, right? Okay, so you're asking... okay, the point is, let us take the leptoquark. If I take a specific quantum number for the leptoquark, one specific leptoquark, then I can generate only one specific Lorentz structure. And I agree with you that if you take the Z prime, if I understand your question correctly, depending on the charge assignment, you can have not only operators with one specific chirality, you can have whatever you want. I agree. It's much easier, if you think about BSM, to assign things in a chiral way, but it's not a no-go theorem; there are various options. But with the leptoquark, one single leptoquark, you select one specific chirality. With the Z prime, you are right, you can generate more different chiral structures in one shot. No, the issue was... maybe you can go back to your slides; we don't see them right now. Sorry, sorry, very sorry. Can you see now? No. Not yet. Okay, okay, sorry. Yes, go ahead. Okay, so the one on the right. You have a vector there, right? So the question was: if you put in a scalar there instead of a vector, that would lead you to another sort of effective operator, right, which wouldn't be favored by the fits? Why a vector and not a scalar? Well, in the leptoquark case, we did consider both a vector and a scalar. Okay, let me explain. If you add a scalar, in general, there are the interactions you want, but there are also more interactions; in particular, they may be very problematic for proton decay. So in that case you have to protect proton decay, so you may have to switch off some couplings which you don't want. For some of these leptoquark cases, that is not required.
And it's true, you can generate more structures, but the one relevant for this process is unique: there is a one-to-one correspondence between the quantum numbers of the mediator and the operator for the scalar leptoquark. For the vector leptoquark, you can Fierz things; there are more options. You see there are this U3 and U1, so you have more options. And then let me also add that if you add a vector leptoquark, automatically your theory is not renormalizable, because you get a spin-one particle into the game, so automatically you have to extend the model including more ingredients, and the situation can be much, much richer. Yeah. Okay, thank you. Okay, you had a question, Avelino, I think. We cannot hear you. Avelino? We cannot hear you. No. Okay, are there more questions? Meanwhile, Avelino will try to fix his mic problem. Can you hear me? Yes. Yes. Oh, very good. What I find particularly interesting is the part that was a bit short at the very end, the way to concrete models. Because I've heard more talks on similar topics, and it was sometimes emphasized that in some of the realizations there is an upper limit for the mass scales of the new particles, be it the leptoquark or the Z prime or so, that can indeed be probed by the LHC, or that even with, I don't know, 100 or 300 inverse femtobarns, these models, these explanations, could be ruled out. So it's nice to see the general idea that there's new physics probably; making it concrete, I think, is a very crucial step. And I was wondering, you were mentioning that there were so many different articles and papers: how many really concrete models are there, and can some of them escape the LHC, the high-luminosity LHC, et cetera? Or can most of the models be probed there? I don't know, this would be very interesting for me. Okay, very good question.
So let me go back to this slide here, because I think there are really two different sets of anomalies, and it depends which one you are talking about. For the first set of anomalies, the flavor-changing charged current, this one, I really take your point; it's really as you are saying. If you start to look at serious realizations, then you really start to hit the direct searches in a cruel way. The reason is that the scale here has to be something of order two TeV, one TeV, so I agree with you. In reality, I haven't seen any completely convincing case, and I myself never tried to work directly on this, because it seems very tough. But it doesn't mean it's impossible. I really see the point you are making, and I share it. But for the other class of anomalies, the situation is very different: in that case, you see that the overall scale of the effect is not two TeV, not one TeV; it's really 30 TeV. Well, this depends on what you assume for the lambdas or for the deltas. No, no, the point is that the fit automatically tells you that if you normalize the four-fermion interaction with a scale, then 30 TeV is enough. So you see, if you have a coupling of order one and a mediator of order 30 TeV, clearly above the direct searches, then there is no problem. And let me stress that for leptoquarks, it depends on what you mean by concrete models, which is also very interesting to discuss. For leptoquarks, you can cook up a renormalizable model. Renormalizable means that, in principle, you can extend it to very high energy. And then it's fine, you see: the reason is that the overall effect, in terms of the overall scale, is small. Nevertheless, you see an effect, and you see it only in indirect probes and not in direct probes.
But then, if you are more ambitious, and here I start to agree with you again, if for the flavor structure you start to cook up something which is connected with the Standard Model, so that you have nice answers for the flavor violation, then you pay a suppression in the couplings, and you are going back to the electroweak scale or the TeV scale. So the answer depends a lot on your flavor structure, but I would say that for these flavor-changing neutral current anomalies, the LHC really plays an important role for the model building, but it's not a complete killer; while for the first class, the direct searches are really, really severe, yes. Of course, you want to explain everything with one model, yeah? Ah, okay, yeah, very good. Your explanation already helped me a lot. It's very interesting because, as the papers show, the very ambitious person wants to do everything in one shot. If I have three anomalies and I have three explanations for them, then I can continue like this for all kinds of other anomalies that are on the market. But it would be nicer if there's one that captures them all, yeah? I agree, I agree. But what I want to show you is that here it is not one anomaly: in this b to s mu mu story, there is a list of anomalies. So even if you forget about the previous one, I think there are enough observables to make the story interesting. Maybe the other one, in the charged current, will have a more subtle story behind it, but this one is not only one observable. But I agree with you: if you want to explain everything in one shot, then you buy the problem, which is coming mostly from the charged-current anomaly. Yes, but I think it's a good question, because it clarifies a bit how we think about model building in this story. In every paper, you have to check whether it is discussing one anomaly, one class of anomalies, or everything together in one shot. Yeah, okay, thanks. Thank you.
Okay, there's another question now from Avelino, in writing, since he doesn't want to talk. He's asking: since you already prepared something, could you tell us your opinion about the low RK* bin? Ah, very good, yes. So this is a slide which was ready. Okay, this is slightly more technical, but it's important, so I'll try to summarize it in two minutes. The experimental collaboration reported the results in two bins: the very low q squared, this one on this axis here, from 0.045 to 1.1, and the intermediate range, from 1.1 to 6. So what happens? Let's start from the Standard Model. In the very low q squared region, there is a contribution from the photon: you hit the photon pole, and the photon pole is known to be lepton universal, because the photon couples to the electron and the muon with the same strength. So the expectation was that in the low q squared region, even if you have new physics in the intermediate part, you expect the new physics effect to be softened. Okay, so what we are observing is a deficit which is quite remarkable also in the low q squared region. In this plot on the right, you see the correlation between RK* in the large bin and in the low q squared bin, and then you see the various new physics hypotheses. You see that all of them tend to predict something closer to the Standard Model in the first bin. So there is a tension, if you want, a little one, between the data point and the various models, all of them, including the Standard Model. So this can be an issue. Suppose that you collect more data and the error here shrinks a lot, and you cannot come back here to the models or to the Standard Model; then this can really be a hint that something is not going well with the measurement, or, if you are an aggressive model builder, that the modification you have to do is new physics with a long range.
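Schematically, the reason the photon pole is lepton universal can be written as follows (a sketch, up to normalization conventions): the dipole operator contribution enters the effective vector coefficient with a 1/q² pole,

```latex
C_9^{\rm eff}(q^2) \;\supset\; C_9 \;+\; \frac{2\, m_b\, m_B}{q^2}\, C_7
```

and the C7 term couples through the electric charge, identically for electrons and muons. So as q² → 0 this lepton-universal piece dominates over any new physics sitting in C9, which is why a large deficit in the lowest bin is hard to accommodate, apart from residual muon phase-space (mass threshold) effects.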
So you have to modify things at low q squared: you have to put in a particle below the GeV scale in order to modify the effect there. But with the present status, which is this one, I wouldn't say there is a big discrepancy. It is definitely a place which will be super important for the sanity check of this observable, and it will be definitely very important to see the fate of what's going on in this low q squared bin. Some theorists were afraid of such a big effect in the low q squared bin, but if you are a little bit more quantitative, you can see in this plot that the discrepancy is slightly more than one sigma, so you are back to the possible new physics explanation, assuming just new physics in one Wilson coefficient. Very good. Are there more questions? So there's one question in our live chat from someone called ER4000. The question, I think, goes in the same direction as what Sven just asked: the Z prime mass depends on the values of the deltas; can you tell us the Z prime mass for a typical Z prime model? Ah, okay, very good. What I can tell you, advertising one of my own works... okay, there are various works, and this is not the best one, but it's just because I have it ready. What you can do is start to cook up some flavor models where the flavor violation of the Standard Model and the flavor violation of the new physics are connected. Now, at that point, you see that the flavor violation is suppressed; it's not an order-one coupling, but you start to see something connected with the CKM. This lambda is the Cabibbo angle, a parameter which is of order 0.2, and then you get entries which are suppressed on the quark side of the story. In that case, you see that the Z prime cannot be arbitrarily low, and then you start to have problems.
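The effect of CKM-like suppression on the Z prime mass can be sketched numerically. This is my own illustration under stated assumptions, not the model in the talk: if the flavor-violating quark coupling inherits a suppression of order lambda², with lambda the Cabibbo parameter, then the mass that reproduces the fitted effective scale drops well below 30 TeV.

```python
CABIBBO = 0.22          # lambda, the Cabibbo angle parameter (~0.2)
LAMBDA_EFF_TEV = 30.0   # effective scale suggested by the fit

def zprime_mass(g_lepton=1.0):
    """Z' mass matching the fit when the quark coupling is CKM-suppressed:
    g_q ~ lambda^2, and M = Lambda * sqrt(g_q * g_l)."""
    g_quark = CABIBBO ** 2
    return LAMBDA_EFF_TEV * (g_quark * g_lepton) ** 0.5

print(f"M_Z' ~ {zprime_mass():.1f} TeV for g_lepton = 1")
```

With the lepton coupling also suppressed, the mass drops further toward the few-TeV region, which is why direct searches start to bite as soon as the flavor structure is tied to the Standard Model, exactly the problem described next.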
We didn't update this plot with the new data, but direct searches now extend much deeper into this region, and then the other option is to start to pump up this coupling here, and the situation starts to be problematic. But I mean, I think it's right: as soon as you start to connect with the Standard Model, you lower the scale a lot, and then you start to hit things. And let me again emphasize that here, just for this model, I'm only taking into account the anomalies in the neutral current; if I start to address also the charged current, it's very difficult for me to explain it. Also, there are other ideas beyond flavor symmetries, like this idea of partial compositeness. I don't know if you have heard about this: there are models where the Higgs is composite and the Standard Model fermions are a mixture of elementary and composite states. The fundamental parameters there are these mixing angles, and these mixing angles, which are connected to the Yukawa structure, are also connected with the flavor violation of this anomaly. But again, you pay the price of the CKM structure and the hierarchies of the Standard Model, and then you are back to something which is close to the TeV scale. So the message, I think, is that on the model-building side, especially if you want to address deeper questions like this one, there is space to think. Then, if I may add one comment, though I don't have the slide: it's impressive that such a big framework as supersymmetry, without introducing extra states, is really in trouble with just a bunch of measurements. In that case, the problem is really that you cannot explain the anomaly, even before thinking about the bounds from direct production. And the reason is that the flavor violation you get is always associated with gauge couplings, and you cannot pump up the gauge couplings a lot.
So this was a crash overview of the BSM options; I'm very sorry I didn't have time to talk about everything. Okay, are there more questions from the audience? Yes, you brought up the topic of SUSY: is there anything on R-parity violating couplings? This is a complicated question. No, because with R-parity violation you pay a price. It's complicated because, from the point of view of simplified models, R-parity violation in the very end is just the Standard Model extended with some leptoquarks, if you want to see the effect here. Now, the leptoquarks at tree level which you have from the MSSM are the sbottom, the squarks and so on, and unfortunately they don't have the right quantum numbers to mediate the effect at tree level. So you are forced to go to the loop level, and again you have to pump up the couplings really, really strongly. Then you have to see if you can explain the anomaly, but you have to pump up the R-parity violating couplings a lot, and then you hit again the direct searches. So, in my opinion, it's still not 100% clear whether R-parity violation may play a role. You can play with it. Honestly, I'm also playing with it, so I don't want to tell you more. Okay, so my last question is: what should we look out for in Belle II, assuming all of this is correct? Is there something that LHCb cannot see but Belle II can see that would be correlated to this operator? Very good question. So Belle II, first of all, can repeat the story in the channels where there is an anomaly, and they can also study the ratios, but those ratios can be studied also by LHCb. What you really gain from Belle II, in my understanding, is this: if you see something in charged leptons, then you start to think about what you may see in neutral leptons, namely with neutrinos in the final state. And with neutrinos in the final state, you need a really full reconstruction of the event.
In that case, Belle II is a machine which is not hadronic, so you get much more control. What you gain is a better sensitivity on what are called, unfortunately with the same names, RK* nu nu and RK nu nu. So you look for deviations in these channels compared to the Standard Model, where in some models you can have a big effect, really deviations of order one compared to the Standard Model. The present bound is about four times the Standard Model, and at Belle II you can reach a sensitivity which is down to 50% of the Standard Model. But this kind of measurement, B to K nu nu and B to K* nu nu, is something that at LHCb, to my understanding, you cannot do. So Belle II will add its own word on what we have already seen, plus there are lots of these channels. Super, thank you very much. Okay, so thanks a lot, Marco. I guess there are no more questions. Thank you very much. Or are there more questions? No more? Okay, so thanks again to Marco and all our viewers. Let me remind you guys that next week we have another webinar, by Martin Spinrath from Karlsruhe. He will talk about neutrino mass sum rules. So thanks a lot, and let's meet next week for another webinar in this series of Latin American webinars on physics. So thanks, Marco. Bye bye. Ciao, thanks. Ciao.