Okay, I think we are live. Yes. So, hello, everyone, and welcome back to the Latin American webinars on physics. I'm Joel Jones from the PUCP in Peru, and I will be your host today. This is webinar number 140, and we have Admir Greljo as our speaker. Admir carried out his PhD at the University of Ljubljana in Slovenia, which was followed by postdocs at Zurich and Mainz. He then became a senior research fellow at CERN, then an SNSF Eccellenza professor at Bern, and has recently moved to a tenure-track position at Basel. Today, Admir will give us an update on the B-meson anomalies, focusing on their interplay with high-pT observables, and we're very happy to have him as a speaker. Before we begin, let me remind all viewers that you can ask questions and make comments via the YouTube live chat, and these questions will be passed on to Admir at the end of the talk, as usual. So please go ahead and share your screen. Okay. Can you see my slides? Everything good. All right. Well, thank you very much for this kind introduction. Today I want to present work that resulted in an arXiv preprint, number 2212, which you can see here. This is work done in collaboration with Jakub Salko, Aleks Smolkovič, and Peter Stangl, and the title of the paper is "Rare b decays meet high-mass Drell-Yan". So here we deal with flavor. To start with a very general introduction: flavor is this mysterious property of matter where we have copies of the gauge representations of the fermionic fields. We have three copies, and we have no idea why. It is, in my opinion, the most mysterious aspect of the Standard Model, the Standard Model otherwise being quite compact: a quantum field theory with certain symmetries, the space-time symmetry and the gauge symmetry, and a certain field content, in particular the matter fields. For the fermions we have five gauge representations, shown over here with their quantum numbers, but all of them come in three copies. This index i represents flavor, and i runs from one to three; this is the aspect that brings in complexity. That complexity is reflected in the number of parameters of the theory. If we focus on the Standard Model as the set of space-time- and gauge-invariant operators of mass dimension up to 4, we have five parameters in the gauge and Higgs sector, modulo the theta terms, while in the Yukawa sector we have 13 parameters. So there are many more parameters in the Yukawa sector, the one responsible for flavor, than in the gauge and Higgs sector. If there were only a single generation of fermions, we would have only three Yukawa parameters; the fact that we have 13 is tied to having three generations. And this proliferates dramatically if we consider the Standard Model effective field theory. There, we think of the Standard Model as only the low-energy limit of some more complete theory: the effects of that microscopic theory, sitting at high energies or short distance scales, manifest themselves at low energies as higher-dimensional operators added to the Standard Model, the operators of the Standard Model effective field theory. Here the number of parameters proliferates very quickly: at dimension six, for operators that conserve baryon number, there are 2,499 operators, while for a single generation of fermions this number would have been 59.
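To make the counting concrete, here is a minimal tally (an illustration added in editing, not shown in the talk) of the physical flavor parameters quoted above:

```python
# Physical parameters of the three-generation Standard Model Yukawa sector.
quark_masses  = 6  # u, c, t, d, s, b
lepton_masses = 3  # e, mu, tau (charged leptons; no neutrino masses at dim 4)
ckm_angles    = 3  # mixing angles of the CKM matrix
ckm_phase     = 1  # single CP-violating phase

print(quark_masses + lepton_masses + ckm_angles + ckm_phase)  # 13

# For a single generation there is no mixing: one up-type, one down-type and
# one charged-lepton mass, i.e. 3 parameters. The gauge-plus-Higgs sector has
# 5 parameters (g1, g2, g3 and two Higgs-potential couplings), modulo theta terms.
```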
So you see that flavor also introduces complexity as we go beyond the Standard Model. And it is not only about counting the input parameters of the theory; it also has to do with the actual observed values of those parameters. We have measured the ones of the Standard Model, which would be the upper part of this plot: the charged-fermion masses and mixings. There is also very good data on neutrinos, which in this picture can be attributed to dimension five, the leading higher-dimensional operator in the Standard Model effective field theory with this field content: the Weinberg operator. And we see some very interesting patterns here. For example, in the quark sector and the charged-lepton sector we see hierarchies in the masses between different generations, and the CKM mixing matrix is almost a unit matrix with small off-diagonal elements, implying that quark flavor is an approximate symmetry. The neutrinos instead behave very differently. These patterns are empirical, something we observe, and we call them the flavor puzzle; we think they are a hint for physics beyond the Standard Model. Within the Standard Model they are just input parameters and cannot be addressed, and hopefully beyond the Standard Model there is a theory which explains these weird features. However, this puzzle, these patterns observed in the fermion masses and mixings, is in some sense also a blessing when it comes to phenomenology. Patterns can be associated with symmetries, and in particular these patterns give rise to what we call the approximate flavor symmetries of the Standard Model. The fact that the masses are such and such and the CKM matrix is such and such gives rise to approximate flavor symmetries that help us understand and work out Standard Model phenomenology. Well-known examples are isospin, flavor SU(3), heavy-quark symmetries, the GIM mechanism, and so on, and they allow us to simplify the hadronic phenomenology. But these approximate flavor symmetries can also be used to formulate stringent tests of the Standard Model, tests so stringent that even tiny new-physics effects could be detected, giving us a window onto new physics. The type of stringent test I am going to talk about today, lepton flavor universality, is a good example of those relying on the approximate flavor symmetries of the Standard Model. So, in the Standard Model flavor is approximately conserved. But beyond the Standard Model, already at dimension six in the Standard Model effective field theory, there are plenty of new sources of flavor violation. If you assume flavor anarchy, meaning that all these new coefficients are order-one parameters, then flavor observables like neutral-meson mixing, CP violation, and electric dipole moments already exclude new physics up to very high energy scales. These exclusions are much more powerful than what one can achieve from high-pT physics, meaning from Higgs, top, electroweak, or direct searches. Only if one assumes a flavor protection in the new-physics sector can this be avoided.
That is, only if one assumes a flavor symmetry which is minimally broken by the known sources, the Yukawa couplings, the Yukawa matrices, the same sources as in the Standard Model, can these stringent bounds be relaxed, giving breathing room to new physics at the TeV scale. So this illustrates the importance of flavor-physics probes, and also the importance of approximate flavor symmetries as a tool to organize phenomenology. This table here shows how one can use flavor symmetries to chart the space of the Standard Model effective field theory. It comes from work with Anders Eller Thomsen and Ajdin Palavrić, where we counted the number of independent parameters assuming different flavor symmetries in the quark sector and in the lepton sector. Starting with the MFV case, shown in the top left corner, where we assume the U(3)^5 flavor symmetry, the largest flavor group, and where we only show the leading terms that are flavor symmetric, not including the spurion corrections, we then relax this symmetry systematically, looking into the U(2) and U(1) subgroups, all the way down to the no-symmetry limit, which is the 2,499 operators. This gives us a systematic way to organize the flavorful part of the Standard Model effective field theory at dimension six, to study phenomenology, and to study the interplay with high-pT physics: top, Higgs, electroweak, and so on. All right, so this was an introduction to flavor, and in particular flavor beyond the Standard Model. Now I move on to the observables, the rare B decays, which are very important in constraining short-distance new physics described by, say, the Standard Model effective field theory. Rare B decays are very rare in the Standard Model, and they are also very well measured, and these two features together make them excellent probes of new physics. For instance, at the moment we have sensitivity to tree-level mediators with order-one couplings and masses of about 40 TeV. A leptoquark with order-one couplings sitting at 40 TeV has no chance of being detected directly at the LHC, it is simply too heavy to be produced on shell, but it could indirectly lead to an effect in these transitions and might be detected that way. So what is the reason behind this sensitivity? First of all, these are flavor-changing neutral currents, which do not occur at tree level in the Standard Model; and in addition there are the approximate flavor symmetries. Here we have the GIM mechanism, and in particular there is a large breaking due to the top quark, which forces this Hamiltonian to be proportional to V_ts, the CKM element involving the top quark, a small parameter of order 10^-2. So there is a loop suppression plus this CKM suppression. In the Hamiltonian, in addition to G_F and these two factors, there is a Wilson coefficient C_i and an operator O_i; here I am showing O_9 and O_10. These are semileptonic operators connecting a strange quark and a b quark to a lepton-antilepton pair. On the quark side they are left-handed vector currents, or right-handed for the primed operators; on the leptonic side they are vector currents for C_9 and axial-vector currents for C_10. These two operators will play an important role later on.
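As a rough cross-check of the quoted 40 TeV reach, here is an editorial order-of-magnitude sketch (standard inputs; the normalization follows the effective Hamiltonian just described):

```python
import math

# Translate a shift Delta C9 of order one into the scale Lambda of a generic
# contact interaction (sbar gamma b)(lbar gamma l) / Lambda^2, using the
# normalization H_eff ~ (4 G_F/sqrt(2)) * Vtb Vts* * (alpha/4pi) * C9 * O9.
GF       = 1.166e-5    # Fermi constant, GeV^-2
alpha_em = 1 / 133.0   # electromagnetic coupling near the b-quark scale
VtbVts   = 0.04        # |V_tb V_ts*|: the CKM/GIM suppression

def scale_TeV(delta_C9):
    """Contact-interaction scale (TeV) matching a given C9 shift."""
    inv_lambda2 = 4 * GF / math.sqrt(2) * alpha_em / (4 * math.pi) * VtbVts * abs(delta_C9)
    return 1e-3 / math.sqrt(inv_lambda2)

print(f"{scale_TeV(1.0):.0f} TeV")  # roughly 35 TeV: the loop plus CKM
                                    # suppression is what buys this indirect reach
```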
In the Standard Model, at the b-quark mass scale, the values of these Wilson coefficients can be computed by matching this loop and so on; roughly, C_9 is about 4.2 and C_10 about minus 4.2. Short-distance new physics would correct these Wilson coefficients. In more detail, and more generally, the theoretical predictions of b → sℓℓ amplitudes rely on effective-field-theory factorization: the problem is split into a short-distance contribution and a long-distance contribution. The long-distance part is due to QCD, to strong dynamics, and there we deal with computing hadronic matrix elements; particularly important for this talk will be lattice QCD, light-cone sum rules, and QCD factorization. These hadronic matrix elements are not polluted by new physics; they are determined solely by QCD dynamics. Short-distance new physics enters through the Wilson coefficients, and these short-distance contributions are typically perturbative, in the sense that if the new-physics model is perturbative we can do a series of matching and running calculations through the different stages of effective field theory, the Standard Model effective field theory and the weak effective theory. This program has been carried out in many references recently; for example, the matching and running of the SMEFT is known to one-loop accuracy (a sketch of this pipeline follows below). Okay, so for b → sℓℓ we have two contributions. The semileptonic operator is the leading one: this is the contact interaction obtained when one integrates out the loop that I showed on the previous slides, and here, from the hadronic-physics point of view, we have to deal with local matrix elements, that is, with form factors. This is typically easier than the other type of contribution, the one on the right-hand side, which comes from four-quark operators. By exchanging a W at tree level we can generate a b-strange-charm-anticharm four-quark operator, and this leads to a non-local matrix element involving the electromagnetic current together with this four-quark operator. This is much harder to treat in hadronic physics, in QCD. However, it has very peculiar properties: in particular, it is lepton-flavor universal, and it is a vector current, because it goes through photon exchange. So hunting for new physics in b → sℓℓ decays means avoiding resonances: this four-quark operator is precisely what gives the resonant contributions, and if the invariant mass of the lepton pair sits on the J/ψ, that would dominate over the Standard Model short-distance prediction. In the measurements that LHCb does to search for new physics, one typically avoids the resonances, looking into low-q² or high-q² bins; in particular, the latest R_K measurement was done in two bins, and R_{K*} in the low and central q² bins shown here. From the experimental point of view, it is much easier to measure muons than electrons, as is self-evident from these two plots.
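Here is a minimal sketch of the matching-and-running pipeline mentioned above, using the open-source wilson package; the operator choice and its value at 1 TeV are placeholders, not fit results:

```python
from wilson import Wilson  # pip install wilson

# Switch on one SMEFT operator at 1 TeV in the Warsaw basis (WCxf names):
# [C_lq^(1)]_2223, a left-handed (muon x b-s) semileptonic contact term.
w = Wilson({'lq1_2223': 1e-9}, scale=1000, eft='SMEFT', basis='Warsaw')  # GeV^-2

# Run down and match onto the weak effective theory at the b-quark mass scale;
# the 'flavio' basis exposes the C9/C10-type coefficients used in b -> s ll fits.
wc_low = w.match_run(scale=4.8, eft='WET', basis='flavio')
print(wc_low.dict['C9_bsmumu'], wc_low.dict['C10_bsmumu'])
```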
In the muon channel you can see a very nice measurement with almost no background, while in the electron channel there is leakage and plenty of backgrounds. So, if I were to put all the observables on one slide and discuss how reliable they are with respect to experiment and theory, it would be this very simplified, sketchy plot. On the vertical axis I divide the experimental side by whether we only deal with muons or include both muons and electrons; on the horizontal axis I divide the observables by how clean their Standard Model predictions are. The LFU observables are very clean theoretically: these are ratios of differential branching ratios in a given q² bin, the decay to muons divided by the decay to electrons. In this ratio many uncertainties cancel out, and the prediction for R_K is basically one, up to percent-level QED corrections, an uncertainty much smaller than the experimental one. So these are very reliable in terms of theoretical prediction; however, they involve electrons, so one needs to measure electrons. Then there are the many anomalous observables that involve only muons. One is the branching ratio of B_s → μμ, a fully leptonic B decay which is rather clean theoretically in terms of the QCD input. One drawback is that one needs to know V_cb to make the Standard Model prediction, but this can be dealt with (see the sketch after this paragraph). A good aspect of this measurement is that more than one experiment is at play giving competitive results; CMS actually has the most competitive measurement at the moment. It is generated by the axial-vector current in the Standard Model. Next we move to another cluster of observables: the optimized angular observables in B decays to vector mesons plus a muon-antimuon pair. These are semileptonic decays, B → K* and B_s → φ, where K* and φ are vector mesons. One can construct many angular distributions and take ratios, and in these ratios a lot of uncertainties cancel: they have reduced form-factor uncertainties and cancellations of some parametric uncertainties, so they are also quite good in terms of theoretical predictions. Probably the hardest ones are the differential branching ratios of the various B → K, B → K*, and B_s → φ modes. All right, so what is the status of the experimental measurements? We have anomalies in the differential decay rates of b → s μ⁺μ⁻ and in the optimized angular observables. What is observed is a consistent deficit of muons in these decays, that is, consistently smaller measurements in comparison with the Standard Model predictions, for all these b → s transitions: B_s → φ, but also B → K, B → K*, and so on. The optimized angular observables, as I said, are under better theoretical control than the differential branching ratios, and these are fits done by LHCb.
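As an aside, Standard Model predictions like these can be reproduced with the open-source flavio package, which comes up again in the Q&A. A minimal sketch, with the observable string following flavio's naming conventions:

```python
import flavio  # pip install flavio

# Time-integrated SM branching ratio for Bs -> mu mu and its parametric
# uncertainty (dominated by CKM input such as V_cb, as discussed above).
print(flavio.sm_prediction('BR(Bs->mumu)'))   # roughly 3.7e-9
print(flavio.sm_uncertainty('BR(Bs->mumu)'))
```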
The Standard Model value of the vector coupling C_9 is here, and the fits consistently prefer a smaller value, something like minus 20% or so relative to the one predicted in the Standard Model. The interesting thing is that there is a coherent, consistent pattern: if you allow for new physics in this effective coupling, there is a consistent set of deviations. However, as I already pointed out, there is a lot of theoretical debate about whether we have good control over the non-local matrix elements, the thing usually denoted as "charm loops", which is actually a non-perturbative effect, and whether it could mimic this large shift. It is an open question: we do not know of a QCD mechanism that could produce such a large shift, so the question is whether it can or cannot explain this effect here. What really boosted the whole story out of all proportion were the previous measurements of lepton flavor universality. Those are pristine observables theoretically, and LHCb reported evidence for new physics, for the breaking of universality, at the level of 3.1 sigma; this was published in Nature Physics last March. What was very exciting was that these deviations were consistent with the ones in the other sector: the LFU measurements were consistent with the tensions observed in the optimized angular observables and differential decay rates. This was just mind-blowing. Unfortunately, however, sad news for new physics came last December, when LHCb updated their measurements of R_K and R_{K*}: the lepton-flavor-universality ratios are now Standard Model-like. This is the plot from these two papers, showing the measurements of R_K and R_{K*} in different q² bins, perfectly spot on the Standard Model. The reason for this big change was an in-depth revision and understanding of electron misidentification. So that resolved one set of anomalies, the LFU anomalies; it was basically an experimental issue, a better understanding of electron misidentification. But we are still left with the rest of the anomalies in the b → sℓℓ sector. Today I am only talking about anomalies in b → sℓℓ; I am not talking about the charged-current b → c anomalies. To resolve these, one really needs to understand the Standard Model prediction better, and the bottleneck is the non-local matrix elements. The standard method is QCD factorization, and there is a question about systematic uncertainties, in particular about effects that go beyond the QCD-factorization approach; there has been a lot of discussion in the literature. I want to mention the recent work by Danny van Dyk and collaborators, where they try to estimate the effects beyond QCD factorization by making use of analyticity and unitarity of the matrix elements, formulating a z-expansion consistent with the fundamental principles of QCD, introducing many free parameters, and then using data, calculations at negative q² where they are reliable, and dispersive bounds in order to close the fit. What they find is shown in this plot here.
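For orientation, a small editorial sketch of the conformal variable behind such a z-expansion; the threshold and reference point below are illustrative choices, not the ones of that paper:

```python
import math

def z(q2, t_plus, t0):
    """Map the q2 plane, cut above the threshold t_plus, onto the unit disk."""
    a = math.sqrt(t_plus - q2)
    b = math.sqrt(t_plus - t0)
    return (a - b) / (a + b)

# Amplitudes analytic away from the cut become power series in z with bounded
# coefficients, which is what lets dispersive bounds "close the fit".
t_plus = 4 * 1.87 ** 2       # e.g. the charm threshold (2 m_D)^2, in GeV^2
print(z(1.0, t_plus, 0.0))   # small |z| at low q2: fast-converging expansion
```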
Basically, they find a very similar result to QCD factorization, except with somewhat larger uncertainties as one approaches the J/ψ. In terms of the tension, however, they find that it remains: they cannot explain the effect in P_5', in the optimized angular observables, by this effect. Of course, there are opposing viewpoints in the community, for example the recent work by Silvestrini and collaborators. They acknowledge the previous work in terms of estimating the contributions beyond QCD factorization coming from this type of contribution, but they point out that there could be effects that go beyond that, something like rescattering of D mesons and so on. They do not give any calculation of these effects; they just point them out. In fact, they consider two scenarios. In one, they assume, like the previous work, that the corrections beyond QCD factorization are subleading, and then they also see a need for a correction to C_9. However, if they use a more general parametrization of these potential new effects, which they do not calculate but fit from the data, then they clearly see no need for new physics. So I have to say it is a very difficult thing to assess at this point, and it is also not clear how fast we will make progress on this front, but it is very important, a problem that needs to be tackled. All right. What I want to do for the rest of the talk is to ask a different question. Let us assume that QCD factorization works well and that the corrections beyond it are insufficient to explain the effect in these b → s μμ observables. Then what can new physics do, in particular given that the LFU observables are now Standard Model-like? How does the new-physics landscape change? In the first part of this paper we did fits in the context of effective theory. The weak effective theory Hamiltonian is given in this line here, and these are the operators we consider: we basically look into the semileptonic operators O_9, O_10, and the others shown on this slide. Okay, so this is the first fit we want to show: a fit to the two Wilson coefficients C_9 and C_10, assuming real coefficients, with the corresponding operators shown. Here is the region preferred by b → s μμ, by the optimized angular observables and the branching ratios, and it is away from the Standard Model point, which is this one over here. Then there is R_K and R_{K*} in blue, which is now consistent with the Standard Model, and then there is B_s → μμ. Here one can see a slight tension between b → s μμ and the LFU ratios; note that here I am considering new physics in muons only, assuming no new physics in electrons. In the appendix of v2 of this paper, which we actually put on arXiv today, we have a table like this with a lot of one-dimensional scenarios: considering one Wilson coefficient at a time and performing fits, again separating the b → s μμ observables from the LFU ones, so that one can compare these things.
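For readers unfamiliar with the "pull" quoted in such tables, a one-line editorial reminder (the numbers are placeholders, not the paper's values):

```python
from math import sqrt

def pull_sigma(chi2_sm, chi2_best):
    """Gaussian-equivalent significance of a one-parameter improvement over the SM."""
    return sqrt(chi2_sm - chi2_best)

print(pull_sigma(chi2_sm=14.0, chi2_best=0.3))  # ~3.7 sigma, for illustration
```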
In particular, the best-performing scenarios are the LFU ones, meaning that the contribution to electrons and muons is the same. A universal C_9 gives a pull away from the Standard Model at the level of 3.7 sigma, and a universal C_10 also gives a good description, at 3.5 sigma. There is a very small, not very significant tension: here B_s → μμ plays a role, and you can compare these two numbers, which are compatible within the margins; this is why the significance is somewhat reduced, though only marginally. The scenario C_9 = −C_10 is also a possibility. I have to say that the pull of the global fit for a muon-only C_9 is also not terribly bad; however, there is again this slight tension: if you look at the parameter space preferred by b → s μμ and at the LFU constraints from R_K and R_{K*}, there seems to be some small, two-sigma-ish tension between the two. This plot here shows a combination of LFU versus LFUV: the LFU axis is the universal contribution, either C_9 here or C_9 = −C_10 over here, versus the LFUV axis, the contribution to muons only. From this plot you can see that new physics would prefer an effect only in the universal piece and basically no effect, or a very small one, in the muon-specific piece. Okay. This plot over here shows a comparison with b → d transitions. In fact, in this work we considered all possible combinations: b → s or b → d, with either muons or electrons. I think it is a very nice comparison to see how much better the situation currently is for b → s in terms of precision compared to b → d: this is the Standard Model point, but there is a lot of uncertainty, still not nearly as precise as the other one. Okay. Now, the connection of all of this with high pT. If we work in the context of the Standard Model effective field theory as a framework to describe generic short-distance new physics, then we also expect correlations between different observables at low energies and so on. A useful tool is to construct a global likelihood, and this is the program carried out in these references, where one puts all the data together and constructs a global likelihood as a function of the Wilson coefficients, and then does fits, including, for example, flavor data, electroweak precision observables, and so on (a minimal sketch with the smelli package follows below). The work we have done in this reference here, the bulk of it, was to implement high-mass Drell-Yan in flavio and do a systematic study in the context of the Standard Model effective field theory. We consider some heavy new physics that cannot be produced on shell in proton collisions at the LHC; its effect will lead to a contact interaction, say a semileptonic contact interaction, which would then be observed in the tails of high-mass Drell-Yan, that is, in the production of lepton pairs at high energies. This plot over here shows the distribution of the number of events as a function of the dilepton invariant mass, and this here is a Z′ benchmark which is very heavy, more than 5 TeV, so it cannot be observed as a peak; it is simply too heavy.
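A minimal sketch of that global-likelihood program, using the open-source smelli package built on flavio and wilson; the coefficient and its value are placeholders:

```python
from smelli import GlobalLikelihood  # pip install smelli

gl = GlobalLikelihood()  # loads flavor, electroweak and other measurements

# Evaluate the likelihood at a SMEFT point: one semileptonic operator at 1 TeV.
pt = gl.parameter_point({'lq1_2223': 1e-9}, scale=1000)
print(pt.log_likelihood_global())  # log-likelihood relative to the SM point
# pt.obstable() lists the observables driving the change, flavor and high-pT alike.
```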
However, in the region below about 2 TeV, you can see that the benchmark is well described by an EFT: the EFT captures the negative interference of this benchmark with the Standard Model and gives some sensitivity in these tails. Okay. So what we did was take the latest ATLAS and CMS data on neutral- and charged-current Drell-Yan and put limits on the effective field theory. This plot over here, for instance, shows the bound on the effective scale, Λ over the square root of the absolute value of the Wilson coefficient, as a function of the highest bin included in the analysis. In this way we keep some control over the effective-field-theory expansion: if we take the highest bin to be about 1 TeV, we already saturate the sensitivity, and the result is applicable to new physics heavier than 1 TeV. The different curves here represent different new-physics benchmarks, the operators shown here. For example, the "11" label is for the quark flavor, the valence quarks, and the "22" label is for the lepton flavor, the muons: so this would be, for example, d dbar → μ⁺μ⁻. Then here we have b s → μμ, and this is b bbar → μμ. As you would expect, the sensitivities differ: the first three involve valence quarks, while the last two involve sea quarks, whose parton distribution functions are suppressed with respect to the valence quarks. Nonetheless, the scales one can reach in Drell-Yan are in the several-TeV range, which can be very useful for many models. Regarding the Drell-Yan prediction in this mass range: at tree level a number of operators contribute, but the leading effect is due to the four-fermion operators, whose amplitudes scale with the energy squared, while the other operators, those correcting the gauge-boson vertices, give a softer dependence on the energy and are subleading; they are also constrained by on-shell production and so on. So we consider only the four-fermion operators for this exercise. Still, there is a huge number of them: with no flavor symmetry imposed there are 855 operators, and with flavor symmetries imposed there are fewer, with the counting shown in this table. Okay, so here is the first set of results, shown in this table, considering one operator at a time. We switch on, at some high energy scale, say 1 TeV, one operator and one Wilson coefficient; here it would be b → d e⁺e⁻. We consider all possible cases, b → d and b → s, electrons and muons, and all possible semileptonic operators, switching on only one coefficient at the high scale, and then compare the complementary bounds from Drell-Yan and from B decays. You can see the result in this table; for lack of time I will not explain it in detail, but the main point is that in such a case, where really just a single minimal operator contributes to the B decays, the constraints from rare B decays are by far the most stringent, and the constraints from high-mass Drell-Yan do not compete. This is as expected.
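A toy editorial illustration of why the tails dominate the sensitivity (the normalization is arbitrary):

```python
# A four-fermion operator C/Lambda^2 interferes with the SM Drell-Yan
# amplitude, so the correction to a bin at dilepton mass mll grows like
# mll^2/Lambda^2, and the quadratic term like mll^4/Lambda^4.
def np_over_sm(mll_TeV, Lambda_TeV, C=1.0, sign=-1.0):
    """Toy NP/SM rate ratio in a bin at invariant mass mll (TeV)."""
    x = C * (mll_TeV / Lambda_TeV) ** 2
    return sign * x + x ** 2   # interference term + quadratic term

for mll in (0.5, 1.0, 2.0):    # invariant-mass bins in TeV
    print(mll, np_over_sm(mll, Lambda_TeV=5.0))
# The highest bins saturate the sensitivity, as in the Lambda-vs-highest-bin plot.
```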
However, in realistic new-physics models we typically do not get just one coupling switched on with all the others set to zero. If you have, for example, minimal flavor violation, a more involved structure, then Drell-Yan becomes very competitive. So here is an example of a two-dimensional fit. The horizontal axis is the singlet operator, the SU(2) singlet contraction between the lepton bilinear and the quark bilinear, and the vertical axis is the triplet contraction. Here I am imposing minimal flavor violation on this flavor tensor: the first term is the flavor-symmetric delta function, and the second term has insertions of the Yukawa couplings, Y_u Y_u†, for instance. Say I stop the MFV expansion at this point; then I am left with two independent parameters, this one and that one, for the two operators. This plot is for the case where the two are equal, that is, the leading term and the next term have equal size, both for the triplet and for the singlet operators, and this is the fit. In orange you see b → s μμ, and high-mass Drell-Yan is actually very close to the origin. Let me zoom in: here you can see the constraints from high-mass Drell-Yan, in green and purple for neutral and charged currents, and in orange b → s μμ, and there is a tension between these two regions of parameter space. This tension gets even worse in the linear minimal-flavor-violation case. Linear MFV is when the second term in the expansion is even smaller than the first: if you want a meaningful expansion, the top Yukawa being an order-one parameter, there should be an extra suppression of these terms. In that limit the tension increases. If we flip the sign, the tension is reduced when the coefficients have the same size, but again, going to the linear MFV limit, the tension comes back. So this was just an example to say that in realistic models there will be a competition, a complementarity, between the constraints from rare B decays and from high-mass Drell-Yan; a toy version of this MFV expansion is sketched below. How much time do I have? 15 minutes would be okay? Okay, thank you. All right. This is the last part of my talk, where I want to give a few model examples to illustrate these points. These will be LFU models, models that predict the same effect in muons and electrons. I will consider generating the b → sℓℓ contact operator at the UV matching scale at tree level. This is not the only option; one could consider loop-generated or RG-generated scenarios and so on, but tree-level generation is the simplest starting point, and one can do it with the exchange of a Z′ or of a leptoquark. Common Z′ models, completions of a Z′, are typically lepton-flavor universal: an example would be gauging B−L, and such Z′s typically couple universally to electrons and muons. Another example, which we will use, is a coupling to 3B₃−L, where B₃ is the baryon number of the third family; there the constraint from high-mass Drell-Yan is reduced, as I will show later. Leptoquarks, instead, are typically not lepton-flavor universal.
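Here is the toy version of that MFV expansion (an editorial sketch with rough, Wolfenstein-like inputs):

```python
import numpy as np

# Quark-flavor part of the semileptonic coefficient in MFV:
#   C_q = a * 1 + b * (Yu Yu^dag) + ...,
# so the induced b->s entry is proportional to b * yt^2 * Vtb Vts*.
yt, Vts = 1.0, 0.04           # top Yukawa and |V_ts|; phases ignored here
V = np.eye(3)
V[2, 1], V[1, 2] = Vts, -Vts  # keep only the 2-3 CKM mixing, for illustration

YuYu = V.conj().T @ np.diag([0.0, 0.0, yt ** 2]) @ V  # in the down-quark basis
a, b = 1.0, 1.0               # equal leading and subleading terms, as in the plot
Cq = a * np.eye(3) + b * YuYu

print(Cq[2, 1])  # b->s coupling ~ b * yt^2 * V_ts: what rare B decays probe
print(Cq[2, 2])  # flavor-diagonal entries ~ a + b * yt^2: what Drell-Yan probes
```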
Coming back to leptoquarks: if you have a single leptoquark field which couples both to electrons and to muons, a single representation coupling to both, looking at this diagram, then there will be charged lepton flavor violation. There would be decays such as b → s e μ, and closing the loop one gets μ → eγ or μ → e conversion and other processes. We have a discussion of this in the paper, and it turns out that a single leptoquark coupling both to electrons and to muons does not work. So an LFU leptoquark would have to be something else: one needs at least a doublet of leptoquarks. Here I consider this representation, a color antitriplet, weak triplet, with hypercharge one third, typically called S_3 in the literature, and I take two of them: one charged under an electron-number U(1) and the other under a muon-number U(1), carrying electron and muon number respectively. And there is a Z_2 symmetry, a parity that exchanges electrons and muons, imposed for lepton flavor universality. Such a symmetry ensures mass and coupling degeneracy, so that when one integrates out the leptoquark doublet there is the same contribution to muons and to electrons; it enforces lepton flavor universality. These two models map onto these two plots. In particular, the Z′ gives a universal C_9: Z′ models like the B−L one couple to leptons through a vector current. The leptoquark instead gives V−A, so left-handed interactions. Both of them are okay, as you can see from these two plots. There are a lot of complementary bounds, in particular for the Z′. The Z′ gives other tree-level effects: it generates neutral-meson mixing at tree level, since exchanging it between bbar-s currents gives B_s mixing, a ΔF = 2 transition. And given that this Z′ also couples to electrons, it contributes to lepton-pair production at LEP-2, through four-lepton contact interactions, which are very well constrained at LEP-2. You can see that the b → sℓℓ diagram is the product of the two couplings, the red and the blue: the red is constrained by B_s mixing, and the blue is constrained by LEP-2. And this is very generic, irrespective of the completion of the Z′ model; it should apply no matter how you complete the Z′. These are two examples of completions: B−L and 3B₃−L. What you see on these plots are the constraints from different observables, the preferred regions at one sigma: orange is b → s μμ, blue is ΔF = 2 meson mixing, yellow is LEP-2, and green is LHC Drell-Yan. The vertical axis is κ, which tells you the size of the Z′ coupling to strange and bottom, the b-s-Z′ coupling in units of V_ts, and the horizontal axis is the ratio of the mass of the Z′ over the gauge coupling.
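To connect the axes of these plots to C_9, an order-of-magnitude editorial sketch; signs and O(1) factors depend on conventions:

```python
import math

# Z' exchange with couplings g*kappa*Vts to (b,s) and g to muons gives roughly
# |Delta C9| ~ (kappa * Vts) / (M/g)^2 / N, with N the SM normalization
# 4 GF/sqrt(2) * |Vtb Vts*| * alpha/(4 pi) used earlier in the talk.
GF, alpha_em, VtbVts, Vts = 1.166e-5, 1 / 133.0, 0.04, 0.04
N = 4 * GF / math.sqrt(2) * alpha_em / (4 * math.pi) * VtbVts  # GeV^-2 per unit C9

def M_over_g_TeV(kappa, target_C9=1.0):
    """M/g (TeV) at which the Z' produces a |Delta C9| = target shift."""
    return 1e-3 * math.sqrt(kappa * Vts / (abs(target_C9) * N))

print(f"{M_over_g_TeV(1.0):.1f} TeV")  # ~7 TeV: why the horizontal axes
                                       # extend to multi-TeV values of M/g
```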
So everything is characterized by two parameters in the case of a heavy Z′, above the direct searches. If the Z′ were lighter, the bounds from resonance searches would typically be very stringent, unless it were very, very light; here I am considering the heavy case, where the EFT treatment applies. So what is the point of this plot? You see that the b → s μμ anomaly prefers non-zero κ, and there is no region of parameter space where all of these constraints meet. For example, this is the region where b → s μμ and ΔF = 2 are compatible with each other, and the decoupling region, where the masses are heavy and κ is near zero, is where ΔF = 2 agrees with high-mass Drell-Yan; but you see the tension between that region and this region over here. The difference between these two plots is that on this one the constraint from high-mass Drell-Yan is relaxed, because here the Z′ couples to the third generation of quarks, baryon number for the third generation. Here, indeed, high-mass Drell-Yan can be compatible with b → s μμ and ΔF = 2 in this region; however, we still have the bound from LEP-2, which is unchanged because it comes purely from the coupling to leptons. Okay, so that is the general conclusion for Z′s. Leptoquarks are actually easier: at tree level they only give two-quark-two-lepton operators, while the ΔF = 2 and four-lepton operators, like the LEP-2 ones, arise only at one loop, so those are suppressed. Therefore this kind of LFU leptoquark can easily produce the effect while remaining compatible with the other complementary searches. And here is the preferred parameter space. Take, say, κ = 1 or −1, a natural value for this parameter, and consider the mass of the leptoquark over λ, where the λ_i are the couplings to quarks and leptons, with i labelling the quark flavor. If you assume this structure, namely a V_td, V_ts suppression, a U(2) structure in the quark sector, then for λ of order one this would require a leptoquark of almost 10 TeV, which is also fine with the direct searches. Okay, so just to sum up. For the Z′: before the R_K update, the common models were those coupled exclusively to muons; now the data prefer some kind of lepton-flavor-universal physics, and a lepton-flavor-universal Z′ also couples to electrons, so we have important bounds from LEP-2, which somewhat disfavors Z′ models. Leptoquarks, instead: a single leptoquark that couples both to electrons and to muons has issues with charged lepton flavor violation, so we have to impose this delicate global symmetry, a pair of mass-degenerate leptoquarks with identical couplings, one coupling to electrons and the other to muons. In that case we can have a viable solution. All right, so instead of a conclusion, I will give you the outlook.
This is by no means the end of research in rare B decays. The update is clearly disappointing for new physics, but it also pushes the research in different directions and sets new priorities, which is okay, which is good. You see here the program of experimental flavor physics in the upcoming decades: we have LHCb running, we have Belle II, which will be an important player in this field, and other experiments for charm physics and so on, and for tau physics in particular. There is also a lot of work for theorists. The core problem is to do precision calculations of flavor observables, both in the Standard Model and beyond the Standard Model, in particular in the context of effective field theory, the Standard Model effective field theory and the weak effective theory, and to combine different observables. The goal is to match the foreseen experimental precision, and then, hopefully, even though it did not happen this time, at some point we discover a deviation from the Standard Model prediction. And hopefully that will also lead us towards understanding the big open questions of flavor, the flavor puzzles. And that's it. Thank you. Thank you very much for this great talk. So now we open the question round. We encourage viewers to add their questions in the YouTube chat; however, since there is usually a lag between the talk and the YouTube transmission, we will take questions from this audience first. I have a couple of questions, but I don't know if Roberto would like to begin. Yes, I have one. I was wondering: with these effective operators, can you get some insight about the discrepancy in the W mass, or something like that? It's a very different set of operators. Here we are looking into four-fermion, semileptonic operators; the W mass would involve different ones, like a T parameter. Sure, there are models which put these things together, but they are not very convincing, in my opinion. And what happened with the anomalies? Well, with the anomalies, it is always very difficult to bet against the Standard Model, that much is clear, but it is always good to look at these things in more detail. Right, my question was: even though you were discussing especially the case of lepton flavor universality, there are other types of U(1) symmetries that people used to use, like L_μ − L_τ, mostly in the leptonic sector. Would this kind of symmetry, which is not universal, also fit within the data? Yes, but those would be rather tricky now, because they give corrections to R_K. One has to look into the details of the model; maybe they fit within the margins. Let me show one of these plots: you see that R_K is now Standard Model-like, and those models would violate lepton flavor universality. But again, there are error bars on that statement. In the last part of my discussion I assumed that R_K is exactly one, so that there is no such contribution, and then I discussed models. Of course, the data is not perfectly saying that it is one, so we have some space for it; it has to be analyzed. One should also look here, when I say "slight tension".
This is just two sigma. In fact, if you look at these fits now, b → s μμ is at 3.7 sigma, which is actually a bit disappointing compared to what it used to be: if you look at the fits from before the update, this one was above five sigma. Okay, not all of them, but this one was. And one last question: you were presenting, for these fits, flavio. Is flavio open? Yes, it is an open-source package. Is it Python, Mathematica, or something else? It is a Python package, based on Python. Okay, cool. There are no questions yet on the... oh, hang on a second, we just got a question from ER4000; that's the username. The question is: the sensation is that the flavor anomalies are going away. Is that feeling right? Well, if that was not clear from my talk, then I totally failed. But it is important, because it is progress: we are solving problems along the way and making progress. Yeah, and it was a path that had to be followed. Yes. I think we have no better alternative than to look at things in more detail, do better measurements and better computations, and then hope that the two will eventually disagree. Yeah. So the main thing here is that if you want to describe the angular analyses and the decay rates, you now do not want to go to the lepton-flavor-universality-breaking operators, because of R_K. So of course the conclusion is that you now need two scalar leptoquarks for addressing these other anomalies, whereas before you needed one: one scalar for the muon, and one scalar for the tau sector, which you haven't touched. So now it's two; and if you wanted to also address the R_D anomalies, you would need three scalar leptoquarks. Is this correct? Yes, well, it depends what "address" means. The models I presented in the last part of my talk were hypothetical: what if R_K is exactly one, with no lepton-flavor-universality-violating new physics, what would the landscape of new-physics models be? Indeed, in that case one needs a flavor doublet, two leptoquarks with degenerate couplings, protected by some kind of global symmetry structure. But the current data, again, is not saying that R_K is exactly one. If you look at this Wilson coefficient here, the one with just muons: the pull of b → s μμ is 3.6 sigma. It does badly on the LFU ratios, but combined it is not horrible; sure, there is a small tension, and we are still talking about a three-sigma effect. Okay, and this universal guy is at 3.7 sigma, the one I explained in the last part of the talk, which predicts no contribution to the LFUV observables, to R_K and R_{K*}; it is purely LFU, and the fit is at 3.7 sigma. The charged-current anomalies are also at 3 sigma. Yeah. Right. So it's a problem, this thing about lepton flavor violation, right? Because before, you could avoid that with the leptoquark.
No, but I am saying: if you are willing to hunt this three-sigma anomaly, then a leptoquark which couples only to muons and not to electrons would fit the first line. Sure, it would have a small tension between R_K and B_s → μμ, but that tension is at the level of two sigma. If you assume, as I did in the last part of my talk, that R_K is exactly one, which it is not; but maybe that will be the development in the future: maybe this tension remains, we do not figure out how QCD could solve it, and the R_K measurements become better and better and closer to unity; then one would have to go with some LFU new physics. Also, the models I explained apply only at tree level; there are other options. For example, and this would be a bit more interesting in the context of R_D and R_{D*}: one can have a purely third-generation operator like b bbar τ τ, then close the loop, attach a photon, and get an RG effect through operator mixing into b → s ℓℓ; of course, this effect is suppressed compared to a tree-level effect. For the tree-level case, I think it is interesting that if you really want a Z′ model and you really want to put R_K at one, then you get these complementary bounds from LEP-2, which I think is something new: before, you just coupled to muons, so you had no LEP-2 constraint. Yeah, exactly. So then, okay, I think you have answered my second question, which was: from this RG effect, this operator mixing, could you say something about the operators solving R_D, given the constraints from R_K? Because, as I understand it, whatever new physics you put in that modifies R_D would also affect... That effect is lepton-flavor universal, because it goes through the photon, which is good. Although the numbers do not work out completely, they are not spot on, indeed: if you consider a contribution to b → c τ ν and its SU(2) counterpart, which is b → s τ τ, and you close the tau loop and attach a photon, you get an LFU effect, a universal C_9. Yeah, and that will not affect R_K. Exactly, that would not affect R_K. Excellent, okay, great. Let's see if we have any other questions. People are thanking you for the talk; two people are thanking you for the talk. Well, thank you for the patience. No, it's been a very, very nice talk. Oh, one last question: you mentioned that your Standard Model predictions depend on V_cb. Could you remind me of the status of V_cb? Because there was a long-standing issue between the exclusive and the inclusive determinations, and I don't know how it affects all of this. Yes, exclusive versus inclusive. This is basically converging on the inclusive, and the prediction we are using is closer to the inclusive V_cb. Okay. When I said that one could use tricks to go around V_cb here: for example, one can construct the ratio of this branching ratio with ΔM_s, the B_s mixing observable, and V_cb cancels in that ratio. Right, so then V_cb is part of the fit. Yes: in flavio, V_cb is part of the full fit, so it is fitted together with the Wilson coefficients.
I see, I see; and that's why you say it's closer to the inclusive one. Right. Yeah, okay. Okay, great, great. Fantastic. So that's it, I guess. There are no more questions from the audience, and I don't know if Roberto has any others. So thank you very much for the talk, and thanks, everybody, for joining. Please stay tuned: in two weeks' time we will have Valerie Donke giving a talk. So thank you once again, Admir, for the wonderful talk. Thank you very much for having me. See you around. Bye. And we are no longer live.