OK, we are online. So hello, everybody, and welcome back to the Latin American webinars on physics. I'm Joel Jones from the PUCP in Peru, and I'll be your host today. This is webinar 81, and our speaker is David Straub. David carried out his PhD at TUM in Munich and has gone through three postdocs: the first at the Scuola Normale in Pisa, then Mainz, and finally back at TUM, where he is currently working. David will give us an update on these intriguing B-meson anomalies, and we're very, very happy to have him as a speaker today. Before we begin, let me remind the viewers that you can send questions and comments via the YouTube live chat, and these questions and comments will be passed on to David at the end of his talk. OK, having said that, let me give the microphone to David. — Yeah, thank you very much. Thanks for the introduction and the invitation, Joel. So let me go right to my presentation and maximize it. What I want to discuss today is an update of the model-independent analysis of new physics in B decays, where, as Joel already mentioned, several intriguing tensions have emerged in recent years in various experiments. And a few weeks ago— — You haven't shared your screen, I think; we cannot see the slides. — Thank you very much. After all the testing! OK, here we go. Can you see it? — We can see it, yes, perfect. — Awesome. All right, so what I was going to say is that there have been a few relevant experimental updates, presented mostly at the Moriond conference in March, and in a recent analysis we tried to see what they imply for the new physics models that were built to solve these anomalies. So let me first start with a very rough overview of what these B anomalies actually are. There are three separate classes of experimental measurements of observables in B decays where tensions have emerged. The first class are the so-called b → s μμ anomalies.
And those have to do with the flavor-changing neutral current transition b → s μ+μ−, a b quark going to a strange quark and two muons, which is loop-suppressed in the standard model and thus particularly sensitive to new physics. However, in these observables there are sizable hadronic uncertainties, and they are actually comparable to the current experimental uncertainties. For instance, there is the famous P5′ observable that I will talk about later, which is shown in this plot. The second class of anomalies, or discrepancies, are the so-called RK and RK* anomalies. Those have been observed in lepton flavor universality tests in b → s ℓℓ transitions, so also a flavor-changing neutral current transition, and they have the advantage that they are theoretically very clean. However, in this case the statistical significance of the measurements is not very large yet. The third class of observables is a quite different process at first sight, because it is a flavor-changing charged current, which is not a rare decay, namely the b → c transition. Also here there is an apparent violation of lepton flavor universality when comparing decays with taus versus decays with light leptons in the final state. This again is theoretically very clean because it is a lepton flavor universality test. And I'll try to demonstrate later that even though this seems like a very different process, if it is really due to new physics, it might have a common origin with the neutral current anomalies. The rough outline of the seminar is as follows: I first want to discuss each of these three classes of discrepancies in turn, with varying levels of detail. Next, I want to discuss combined explanations, first on a completely model-independent basis in the framework of the standard model effective field theory, and then with just one short example of a simplified dynamical new physics model that one can write down to realize this EFT scenario.
And in all of this, it turns out that there is a very useful and powerful tool, which I call the global SMEFT likelihood. I will describe in detail what I mean by that, and this is what I want to discuss in the last part of the talk. All right, but let's start with these b → s μμ anomalies in flavor-changing neutral currents. Flavor-changing neutral currents, as I already said, are loop-suppressed in the standard model because of the GIM mechanism. They are additionally suppressed by small CKM elements, so they are very sensitive to physics beyond the standard model. In fact, these are the kinds of B decays where, before the start of the LHC, one would have expected new physics to show up first, if anywhere. There are several exclusive decay modes, decay modes with a specific hadronic final state, that are measured at the LHC, namely B → K* μ+μ−, B → K μ+μ−, also Bs → φ μ+μ−, and Λb → Λ μ+μ−. They are all based on the same quark-level transition, b → s μμ. And B → K* μ+μ− is special because it is actually a four-body decay: the K* further decays to a kaon and a pion with almost 100% branching ratio. In this four-body decay one can measure various angular observables by forming angular coefficients from the kinematical angles between the various decay planes that I show in this sketch. All of these angular coefficients — there are 12 in full generality — are sensitive to new physics in complementary ways. Moreover, from these angular coefficients you can form ratios of observables in which many theoretical as well as experimental uncertainties cancel. So by a global angular analysis of this decay, you can actually constrain new physics quite well.
And already almost six years ago, in 2013, when LHCb presented the first analysis of this decay mode with one inverse femtobarn of data, the so-called P5′ anomaly was observed, a significant deviation in the angular observable P5′, one of these 12 observables, in the low dilepton invariant mass region, where q² is the dilepton invariant mass squared. This is shown in the left-hand plot here. Then two years later, when LHCb analyzed the full run-1 data set, this tension was actually confirmed. I stretched the plot a bit to be on the same scale as the left-hand side, and as you can see, the data points moved a little closer to the standard model, but the uncertainties also shrank a lot, so the size of the tension even grew a little. There are actually more deviations in exclusive b → s μμ transitions. For instance, there seems to be a suppression of many exclusive b → s μμ branching ratios. In these plots I just show a few examples: B → K μμ with a charged or a neutral B meson, Bs → φ μμ, and B0 → K*0 μμ. In all cases you see that especially the blue data points, which are the LHCb measurements, seem to lie systematically below the standard model predictions. However, one has to keep in mind that computing the standard model predictions that one then compares to the experimental measurements is not trivial, because we are really interested in the short-distance physics between quarks and leptons, but what we observe are observables involving hadrons. So we need to compute the hadronic matrix elements of these semi-leptonic operators between the mesons in the initial and final state, and this requires non-perturbative methods. So on the one hand, one requires the hadronic form factors of these heavy-to-light meson transitions, for which one can use either lattice QCD or non-perturbative methods such as light-cone sum rules.
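As an aside on notation: P5′ is one of the so-called optimized observables, built from the angular coefficient S5 and the longitudinal polarization fraction FL so that the leading form-factor uncertainties cancel between numerator and denominator. A minimal sketch of the standard definition P5′ = S5/√(FL(1−FL)); the input numbers below are purely illustrative, not a real measurement:

```python
import math

def p5_prime(s5, fl):
    """Optimized angular observable P5' = S5 / sqrt(FL * (1 - FL)).

    The normalization is chosen so that form-factor uncertainties
    largely cancel between the numerator and the denominator.
    """
    if not 0.0 < fl < 1.0:
        raise ValueError("FL must lie strictly between 0 and 1")
    return s5 / math.sqrt(fl * (1.0 - fl))

# Purely illustrative inputs in one q^2 bin:
print(p5_prime(-0.2, 0.5))  # → -0.4
```
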
And it's actually good that we have such complementary methods, because they turn out to be valid in different kinematical regions, namely on opposite ends of the q² spectrum. So when you combine them, as we have done, for instance, in this paper here, you can check the consistency between them, and there do not seem to be any inconsistencies here. However, even if we knew the form factors to infinite precision, which of course we don't, we would still have sizable hadronic uncertainties in this decay. This is because there are also non-factorizable hadronic effects that originate from diagrams such as this one, where you have a purely hadronic transition here: a quark loop radiating off a photon, which then splits into a lepton pair. This kind of contribution cannot be expressed in terms of form factors, especially when the gluons connecting this loop and the mesons are soft. At first sight one may think: OK, but this is a loop correction, shouldn't this be a small effect? But this is actually not the case, because this local operator here, for instance if this is a charm quark, can be generated at tree level in the standard model — it is a charged current, and the W-b-c coupling and the W-s-c coupling are both allowed at tree level — while the upper diagram is only allowed at loop level in the standard model because it is a flavor-changing neutral current. So it turns out that this contribution could, in principle, be sizable. It can actually be partly calculated, for instance in QCD factorization; this was derived a long time ago. However, the remaining contribution, which is incalculable, is in the end a sizable part of the error budget. But recently there has been renewed interest in this, of course because of the anomalies, and it was also shown in this paper that one can use experimental data on non-leptonic decays to constrain this contribution.
But also in this case, there does not seem to be any smoking gun of something being fundamentally wrong in the theory prediction. So we still have not understood at all where this tension comes from. So let's assume for the moment that the tension is not due to underestimated hadronic effects and not due to some experimental systematics, but that it is really due to new physics. That this works is actually non-trivial even on a purely model-independent basis, because there are so many observables in so many different exclusive decays that are sensitive to the same kind of short-distance physics that one has to perform a global analysis to see whether a new physics explanation is actually able to explain all of the observables simultaneously. Fortunately, this can be done in a completely model-independent way by using effective field theories. So we express all new physics contributions as modifications of the Wilson coefficients of some local operators, that is, local interactions between standard model fields. Here I want to focus on just two of the operators — or let's say four, because they will play the main role; in total there is just a handful that can actually be relevant in this process. The four I want to focus on are O9 and O10 and their primed counterparts, which correspond to left- or right-handed vector currents in the flavor-changing quark transition and a vector or axial-vector lepton current. What you can now do is express all of the observables in terms of the Wilson coefficients of these operators and then fit the Wilson coefficients to the data. This was done for the b → s μμ data in this paper here last year. This is a table from that paper, showing the best-fit values of the new physics contributions to the various Wilson coefficients, where 0 would be the standard model value.
And what is also shown are the allowed ranges at 1 and 2 sigma and the pulls, where the pull is simply defined as the distance of the best-fit point from the origin in units of standard deviations. What you can see is, for instance, that the Wilson coefficient of the operator O9 has a best-fit value of minus 1 with a pull of 5 sigma. What does 5 sigma mean here? It does not mean that we have discovered new physics, because this is not the statistical significance of a single random measurement. But it does mean that the solution with new physics in C9 fits the data much better than the standard model. There are also other solutions: for instance, new physics in both C9 and C10 but with opposite sign, with the best-fit point being roughly half of that for C9 only, also has a pull of almost 5 sigma. And then there are solutions, like the primed Wilson coefficients, which have a very low pull, meaning that they do not significantly improve the agreement of the data with the standard model. You can also look at two-dimensional fits, for instance here in the plane of C9 and C10. What is interesting in this plot is that the contribution from the branching ratio measurements, which, as I showed you, seem to show a systematic suppression compared to the standard model, gives you this orange banana, and the LHCb measurements of the B → K* μμ angular observables give you this purple ellipse here. You can see that both of these constraints deviate from the origin, and they both intersect in the same region. So it all seems to be consistent with this simple new physics explanation. Now, the next step is to look at observables where we test violation of lepton flavor universality in b → s ℓℓ transitions.
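To make the notion of a pull concrete: for a single Wilson coefficient and (approximately) linearized observables, one minimizes a Gaussian χ² and quotes the pull as √(χ²(SM) − χ²(best fit)). Here is a toy sketch of that procedure; all the numbers (predictions, slopes, measurements, errors) are invented for illustration and are not the real fit inputs:

```python
import math

# Toy linearized model: each observable O_i(C) = sm_i + slope_i * C,
# measured as meas_i with Gaussian uncertainty sig_i.  Invented numbers.
observables = [
    # (sm prediction, slope dO/dC, measurement, uncertainty)
    (0.00,  0.10, -0.12, 0.05),
    (1.00,  0.20,  0.75, 0.10),
    (0.50, -0.05,  0.55, 0.08),
]

def chi2(c):
    """Gaussian chi-squared for a given Wilson coefficient value c."""
    return sum(((sm + a * c - meas) / sig) ** 2
               for sm, a, meas, sig in observables)

# For a linear model the weighted least-squares minimum is closed-form:
num = sum(a * (meas - sm) / sig**2 for sm, a, meas, sig in observables)
den = sum((a / sig) ** 2 for sm, a, meas, sig in observables)
c_best = num / den

# Pull = distance of the best fit from the SM point, in sigma (1 d.o.f.):
pull = math.sqrt(chi2(0.0) - chi2(c_best))
print(c_best, pull)
```

With these toy inputs the best fit lands at a nonzero coefficient with a pull of about 3.5 sigma, illustrating how a single-coefficient hypothesis can fit several discrepant observables much better than the SM point.
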
And here, the point is that when you try to write down a dynamical new physics model which gives you a new physics effect in C9 or C10, it turns out that if your new physics couples democratically to electrons and muons, it is quite difficult to evade bounds from LEP, for instance from e+e− collisions. So already before the measurements of these lepton flavor universality tests, people had written down models where the new physics couples only to muons. But then you can look at ratios of flavor-changing neutral current decays with muons or electrons in the final state, which in the standard model have to be very close to 1 because of lepton flavor universality, and you would find that they have to deviate from 1. So in the paper where we did this fit, for instance, we predicted, using the b → s μμ data, what the violation in these lepton flavor universality observables would be. And we found — as others had also found before us, actually — that one would predict a roughly 20% to 25% suppression of all of these ratios in various decays, like B → K ℓℓ, B → K* ℓℓ, and Bs → φ ℓℓ. And in fact, already in 2014, LHCb had measured this ratio RK, of B → K μμ over B → K ee, to be 25% below the standard model, at least looking at the central value, with a significance of 2.4 sigma. Then three years later, LHCb also measured the analogous ratio RK* in B → K* ℓℓ transitions, and there they also seemed to find a suppression of the standard model value, which is close to 1, by roughly 25% or 30%. So this was quite tantalizing, and this is why the update of the LHCb measurement of RK was awaited with much anticipation. It has finally been presented at the Moriond conference. Just before that — surprisingly, actually — the Belle experiment also presented an update of their old RK* measurement, which I show here for completeness, even though the limited statistical precision means that it does not have much impact on the fit.
So this is compatible both with the standard model and with new physics; it does not tell you much. So let's focus on the LHCb result on RK. The new result — oh, sorry, this was supposed to be written down here — the updated LHCb measurement of RK, which includes about a third of the run-2 data, gives roughly a 15% suppression of the standard model value, again with a significance of 2.5 sigma. Actually, this is a combination of the run-1 and run-2 data. The run-1 data alone was re-analyzed, and the value they found was actually 0.7, while the run-2 data alone is compatible with the standard model at 1 sigma, though it is also a little below. So combining these two values, one gets this 0.85. Of course, one can now combine this information and again do a global fit in the effective field theory. This is what we did; it was also done by other collaborations that are listed down here, and the results are compatible. There are a few small differences here and there, which we have already discussed and which are well understood; if you are interested, you can ask me for more details. And so what we found was that combining the b → s ℓℓ data and the new RK, RK* data, a solution in the Wilson coefficient C9 with only muons again gives a very high significance, a pull of roughly 6 sigma. Interestingly, we also found that the solution with new physics only in the Wilson coefficient C10 now gives a pull in excess of 5 sigma. This is partly driven by a new measurement of the branching ratio of Bs → μ+μ− by the ATLAS experiment. And the solution with C9 equal to minus C10 is now actually the most favored one. This is interesting because it corresponds to a purely left-handed Dirac structure of the operator, which is predicted by many of the dynamical models that people have proposed to explain these anomalies. Again, you can look at the two-dimensional likelihood in the plane of C9 versus C10.
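The combination of the two run periods is, to a good approximation, an inverse-variance weighted average. A sketch with illustrative stand-in numbers chosen to mimic the situation described above (roughly 0.7 for run 1 and a run-2 value closer to one — these are not the official LHCb inputs):

```python
def combine(values_and_errors):
    """Inverse-variance weighted average and its combined uncertainty."""
    weights = [1.0 / err**2 for _, err in values_and_errors]
    total = sum(weights)
    mean = sum(w * v for w, (v, _) in zip(weights, values_and_errors)) / total
    return mean, total ** -0.5

# Illustrative stand-ins, not the official LHCb numbers:
rk_run1 = (0.70, 0.10)
rk_run2 = (0.95, 0.09)
mean, err = combine([rk_run1, rk_run2])
print(round(mean, 3), round(err, 3))
```

The combined central value lands between the two inputs (weighted toward the more precise one) with a smaller uncertainty than either, which is the pattern described in the talk.
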
And here in this plot, the orange contour is the fit that I showed you before, because it only used the b → s μμ data, not RK and RK*. RK and RK* themselves give you this blue banana. And again, it is very interesting that both classes of constraints individually deviate from the standard model and both intersect in one region. So this is the situation before the RK and RK* update at Moriond. Adding this RK and RK* update, the contours move a little and no longer intersect perfectly, but since these are only 1 sigma contours here, this does not matter much. So the significance stays roughly the same, and the global fit moves a little closer to the standard model, but with smaller uncertainties, and at 1 sigma it is perfectly compatible with the C9 equal to minus C10 line, which is shown here. What is also interesting, because it is realized in specific new physics models, is to compare what one gets with new physics that couples only to muons versus new physics that couples universally to electrons and muons. This is shown in this plot. The vertical axis is a solution with a purely muonic C9 equal to minus C10, so a purely left-handed operator, and on the horizontal axis one has C9, so a vector-like coupling to leptons, but one which is lepton flavor universal, the same for electrons and muons. The reason we show C9 here, and not C9 equal to minus C10, is that the photon, which couples lepton flavor universally, has a vector-like coupling to leptons; and a photon-mediated effect is actually what happens in many models. Now, before Moriond, as you can see here, the global fit was perfectly compatible with the solution in which this lepton flavor universal contribution was zero and you had a purely muonic effect.
Now after Moriond, with the data for RK and RK* moving slightly closer to the standard model, there is actually a slight, though not very significant, preference for a non-zero lepton flavor universal effect, to explain the larger new physics effect in the b → s μμ transitions compared to RK and RK*. All right, now I want to be very brief in mentioning the anomalies in the charged current B decays, namely b → c τν. As I already mentioned in the beginning, those decays are very different from the b → s ℓℓ transitions because they are not rare. On the contrary, these are actually the least rare B decays, because b going to charm is the least suppressed way a b quark can decay. In fact, the B → D ℓν and B → D* ℓν decays, where the D and D* mesons contain a charm quark and the light leptons are electrons or muons, are the most precise way to measure the Vcb element of the CKM matrix in the standard model. But with a tau in the final state, this is much more challenging experimentally, because the tau decays with missing energy and is not directly visible in the detector, so the measurement is much less precise. However, forming the ratio of the tauonic decay to the light-lepton decay is very interesting, because again it probes lepton flavor universality. In this case it is a bit different from electron–muon universality, because the tau of course has a mass that is not negligible compared to the meson masses — it is actually heavier than the charm quark. So this ratio is not equal to one, but it can still be predicted very precisely in the standard model. And the experimental situation roughly one year ago was that there were many measurements of these two ratios, RD and RD*, by the BaBar experiment, the Belle experiment, and also LHCb. What you see here: the red contour is the global world average, and the little blue circle is the standard model prediction with its uncertainties.
And as you can see, the measured value is almost one third larger than the predicted value, and the significance of this was 3.9 sigma. Now, also at the Moriond conference, a new measurement by the Belle experiment was released, which is shown here in green. As you can see, it is much closer to the standard model — even though it does not agree with the standard model prediction at 1 sigma, it is much, much closer. So the global average, which is this red ellipse here, is now merely 3 sigma away from the standard model. And you can also see that there is a significant tension between this new Belle result and the old BaBar result. So the significance of this tension is becoming smaller, but it is still not negligible. Now, the question is: could these tensions in neutral and charged currents actually be related? The reason they could be related can be seen if you write down the operators in terms of left- and right-handed fields. C9 equal to minus C10 was one of the preferred solutions, as I showed you, and the corresponding operator is one with all left-handed quarks and leptons. Now, this operator is related by just an SU(2) and a flavor rotation to the operator which generates the b → c transition in the standard model, which also involves left-handed quarks and leptons. So you can already see that it should not be too hard to write down a model which gives you both. However, the suppression scales for these kinds of operators would have to be very different: in the case of the muons, the new physics competes with a loop-level standard model effect, so the suppression scale can be quite high, of the order of 20 or 30 TeV, while in the case of the b → c transition, one has to compete with a standard model tree-level process, so the new physics would have to be much lighter.
Now, for writing down such a combined explanation, it turns out to be very instructive not to start with a full-fledged model, which can be very complicated, but to proceed in layers. So what I discussed until now was the effective field theory valid at lower energies, which is also sometimes called the weak effective theory. In this EFT, the sectors b → s μμ and b → c τν are completely independent. But one can also go one level higher and look at the extension of the standard model itself by local operators, and this is what is called the standard model effective field theory. This is a more restricted EFT, because its operators have to be invariant under the full standard model gauge symmetry, and working in this EFT one also obtains relations between, for instance, b → c transitions and b → s transitions. Going yet one level higher, one can write down simplified models, with just single new mediators or single new couplings, with which one can try to generate these operators in the EFT; then one can also find correlations with direct searches, or derive constraints from direct searches in high-pT processes. And this interpretation in the standard model EFT is actually quite straightforward, because there are just two operators which match almost trivially onto the operators I have used so far — oops, sorry. These are the two semi-leptonic operators involving lepton doublets and quark doublets with a singlet or triplet SU(2) structure. The sum of them matches onto C9 equal to minus C10, and the triplet operator with tauonic indices, so lepton doublets from the third generation, directly contributes to b → c τν. However, since these are SU(2) invariant operators, you also unavoidably generate a b → s ττ transition. And while this has not been measured with any reasonable precision yet, because taus are very difficult, as I already mentioned, this is very interesting because it leads to an important loop effect, which is shown here.
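Schematically, the two operators described here can be written as follows (Warsaw-basis notation, with ℓ and q the lepton and quark doublets; signs and overall normalizations are omitted, and the index assignments follow the talk's conventions — muonic lepton indices 22, tauonic 33, quark indices 23 for b → s):

```latex
% Singlet and triplet semi-leptonic SMEFT operators:
\mathcal{O}_{\ell q}^{(1)} = (\bar{\ell}\,\gamma_\mu\,\ell)\,(\bar{q}\,\gamma^\mu\,q),
\qquad
\mathcal{O}_{\ell q}^{(3)} = (\bar{\ell}\,\gamma_\mu\,\tau^a\,\ell)\,(\bar{q}\,\gamma^\mu\,\tau^a\,q).

% Matching pattern described in the talk (normalization omitted):
C_9^{\mu} = -C_{10}^{\mu} \;\propto\; \big[C_{\ell q}^{(1)} + C_{\ell q}^{(3)}\big]_{2223},
\qquad
\delta(b \to c\,\tau\nu) \;\propto\; \big[C_{\ell q}^{(3)}\big]_{3323}.
```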
So it was shown already a long time ago, but then somehow forgotten in this context, that if you have a large contribution to the b → s ττ transition, you get a loop effect where you close the taus into a loop, radiate off a photon, and attach two light leptons. What this means is that you get a lepton flavor universal contribution to the Wilson coefficient C9. And funnily enough, if you start with the SMEFT Wilson coefficient that you need for RD and RD*, you generate an effect in b → s ττ which has the right sign and rough size to explain the b → s μμ data just with this loop-induced effect. This is a very funny coincidence which I think is intriguing. However, of course you cannot explain RK and RK* this way, because it is a lepton flavor universal effect. So, just as I showed the two-dimensional Wilson coefficient plots in the weak effective theory earlier, you can now look at the likelihood in the space of SMEFT Wilson coefficients. Here I show on the horizontal axis the SMEFT Wilson coefficients with tauonic indices, 33, and on the vertical axis the ones with muonic indices, 22. Then I show the constraints: RK and RK*, in blue, are of course only sensitive to the muonic Wilson coefficient; RD and RD*, in green, only to the tauonic ones. For the b → s μμ transitions, one would naively think that they are also only sensitive to the muonic ones, but because of this loop-induced effect, they are actually sensitive to a linear combination of the two. And before Moriond, as you can see, the combination of the yellow and the blue constraints — the flavor-changing neutral current processes — was compatible with the solution where this tauonic Wilson coefficient is zero, namely here.
Now, after Moriond, with this change in the RK and RK* measurement, funnily enough the data moved in a way where the yellow and the blue now overlap in a region, namely here, where the tauonic Wilson coefficient is preferred to be different from zero. And interestingly, it is preferred to be exactly in the region where it would explain RD and RD*. Actually, I have to apologize that this plot does not yet include the updated measurement of RD and RD* from the Belle collaboration, but I can tell you that since this moves the value closer to the standard model, the green band would end up somewhere here, so it would actually fit even better with the yellow and blue bands. Okay, concerning dynamical models, I want to be very brief. I just want to mention that there are in principle three possibilities: a Z′ or W′ in the s channel, leptoquarks in the t channel, or loop-induced effects. But loop-induced effects are typically too small for the charged current anomalies, because you compete with a tree-level standard model effect. And in fact, if you take into account all direct and indirect constraints, almost the only single-mediator model that survives is a leptoquark model, the so-called U1 vector leptoquark: a spin-one leptoquark with standard model gauge quantum numbers (3, 1, 2/3). This is interesting because it is actually the minimal implementation of the scenario that I showed you in this Wilson coefficient plot: if you switch on the coupling of this leptoquark to left-handed quark and lepton doublets with the appropriate generation indices, you generate exactly these Wilson coefficient combinations and nothing else. Then you can look at the likelihood, not in the space of Wilson coefficients, but in the space of couplings. These are the tauonic couplings to second- or third-generation quarks, and you see that RD and RD* prefer this region.
Then you get some constraints: for instance, leptonic tau decays exclude this region, and b → s γ excludes that region. So the preferred values are actually around here, and we selected a benchmark point here. Then, for this benchmark point of the tauonic couplings, we show the muonic couplings in this plane, and you see this is the region where RK, RK* and b → s μμ can be explained, and there are no other relevant constraints. What is very interesting is that you get various constraints from sectors which in the EFT are completely independent, for instance lepton flavor violating tau decays, or lepton flavor violating B decays such as this one. Okay, so this brings me to the last part of my talk, the global SMEFT likelihood. Let me be a bit quick here and just tell you what I mean by this. SMEFT, as I already told you, is the extension of the standard model by all possible local interactions, so higher-dimensional operators. And it is a very powerful tool to analyze new physics, in my opinion, because it allows one to model-independently parametrize not only flavor physics, but also electroweak and Higgs physics, top physics, and also high-pT processes at the LHC. And the assumptions that go into it are actually very weak, because it only assumes that the scale of the new physics is much higher than the electroweak scale — which actually seems to be true, because we have not found any new particles at the LHC. Now, what do I mean by a SMEFT likelihood? Well, this is what is often done, for instance, in Higgs or electroweak physics: you take some observables, express them in terms of these SMEFT Wilson coefficients, and fit them to the data, just as I did for B physics in the first part of the talk.
However, the problem is that you are actually not allowed to do this separately for each sector, ignoring the other sectors, because if you have processes at different energy scales, the renormalization group mixes the Wilson coefficients of the different sectors. So you are forced to consider the global likelihood in Wilson coefficient space, and you are also forced to take into account renormalization group effects. This is technically quite challenging — but why is it useful? I think it is very useful, first of all, for model-independent analyses, as I have already discussed for the example of the B anomalies, but much more importantly, because it greatly simplifies the new physics analysis of dynamical models. Let me take the B anomalies as an example. For this U1 leptoquark model that I showed you, it actually took several years for people to realize the various constraints that you get from different sectors — for example, this loop-induced effect that comes from the semi-tauonic operators, the lepton flavor violating decays, and so on and so forth. But these are all things that, if you had this completely general likelihood function in SMEFT Wilson coefficient space, would be almost instant to derive: you just compute the Wilson coefficients — which for this leptoquark model is a tree-level calculation, so it is trivial — then you run them with the renormalization group, compute all the observables, and the likelihood tells you which constraints are relevant. And this is why we started to build such a global likelihood, and for this we needed several ingredients. First of all, we needed an exchange format for this huge number of Wilson coefficients; we needed to implement the renormalization group running for all the Wilson coefficients; and we needed to compute all the different observables. And basically we now have a public code for each of these three problems.
The first one is called the Wilson coefficient exchange format, WCXF. The second is called wilson, and the third one is called flavio. And based on these different tools, with a few collaborators I've recently published another public code which is called smelli, an acronym for SMEFT likelihood, which provides at least a partially global likelihood function in SMEFT Wilson coefficient space. It already includes 265 observables, not just B physics, but also lepton flavor violation and electroweak precision tests. And we're actually working on extending that to many more processes, for instance, Higgs and top physics. All right, so since time is basically up, let me just skip these slides on how to use the package. We have documentation on this GitHub site that is linked here; you can just click on it and play around with the package, and maybe it will turn out to be useful in a future project of yours. And let me also stress that since this is an open source project, and since there are many more observables in addition to these 265 that probe the SMEFT Wilson coefficient space, we would be very happy about contributions from the community to either of these packages that go in here. All right, so let me conclude. In recent years, several significant deviations in B physics have been observed: angular observables in B to K star mu mu decays, and hints for lepton flavor universality violation in neutral currents with muons and in charged currents with taus. And it turns out that it's actually fairly simple to simultaneously describe all of these deviations in terms of a very simple hypothesis within the EFT, so just two Wilson coefficients essentially. However, the new measurements presented at Moriond moved things a little bit closer to the standard model, though the significance is still large.
Fortunately, we won't have to wait forever until the question whether this is really new physics or not is resolved, because there are several measurements that we can expect in the next two or three years that I think will be able to resolve this once and for all. First of all, LHCb has shown the result with only a third of the Run 2 dataset, and they already have much more on tape, and I guess we will learn at some point what they have. And then there's also a new experiment which is complementary, the Belle II super B factory, which is an e+ e- machine, so it has very different systematics and is much cleaner for some kinds of observables; this experiment, which just started taking data, will also give new information. Finally, I think this whole story of the B anomalies, whether they're really due to new physics or whether they will eventually go away, demonstrates how important this global SMEFT likelihood is, because it would drastically simplify analyses of new physics models. And we have started building such a global likelihood as an open source package, and I encourage you to try it out; we would also be very happy about contributions to it. Thanks a lot. Okay, thank you very much, David, it's been a very, very clear talk. So let's see, I don't know if there are any questions in the audience. So, okay, let me begin. So first regarding smelli, sorry. So first of all, all the slides that you skipped should be available on our webpage, so don't worry, people will have access to them. Right, so my question regarding smelli is: will it be compatible with FeynRules, SARAH, SPheno, all of these other tools for implementing models? Right. So I think that's a very important point, and this was actually one of the main motivations for coming up with this Wilson coefficient exchange format, because the idea there was to define some common conventions that allow different code developers to have interfaces.
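At the level of the file format, such a common convention is quite simple. Here is a sketch of a WCXF-style file, built and serialized with Python's json module; the top-level keys (eft, basis, scale, values) follow the published WCXF convention, while the operator name and numerical value are just an illustration.

```python
# Sketch of a Wilson-coefficient exchange file: EFT, basis, scale, and a
# dictionary of named coefficients. Codes that agree on these conventions
# can pass such a file between each other.
import json

wc_data = {
    "eft": "SMEFT",
    "basis": "Warsaw",
    "scale": 1000.0,                 # renormalization scale in GeV
    "values": {
        # illustrative semi-leptonic operator with real and imaginary parts
        "lq1_2223": {"Re": -2.5e-9, "Im": 0.0},
    },
}

text = json.dumps(wc_data, indent=2)   # what one code writes...
wc_back = json.loads(text)             # ...and another code reads back
```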
So actually the authors of the WCXF paper are developers of 10 different codes already. For instance, Avelino Vicente and Florian Staub from FlavorKit are part of it. And so these 10 different codes already implement this exchange format. The likelihood function itself is just a function that maps from Wilson coefficients to a likelihood, which is essentially a number. But then, as you point out, you can use other codes that compute the Wilson coefficients in a specific new physics model, and using this exchange format, you can then interface them. It's also supported, for instance, by the SMEFT FeynRules model called SMEFTsim, developed by Michael Trott and collaborators, which allows you to generate events in the SMEFT with MadGraph. So this is also using the same exchange format. There are still a couple of codes on the market which do not use it, but I hope that at some point it will be supported by all the codes. That's great, that's great. I have a couple of other questions, but let's see if anybody else in the audience would like to ask something. But I don't get to ask everything. Okay, so they're giving me free rein to go on. So first of all, you also commented that there was this tau loop effect from RD, RD star that could also take care of the b to s mu mu anomaly. So then my question is: is it possible to introduce new physics for RK, RK star in the electron case instead, making it an electron contribution rather than a muon one? Yes, in principle that's possible. So you could have a tau-induced lepton-flavor-universal effect taking care of the P5 prime anomaly and an electronic effect taking care of RK and RK star. Yes, that's possible. And have you done a global fit on this? We haven't made a plot of this possibility, but basically all the plots that we show in the paper are just two-dimensional contour plots of this likelihood function, which is provided by the smelli package.
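As a sketch of what making such a plot amounts to, here is a toy two-dimensional scan, with an invented quadratic chi-square standing in for the real likelihood function:

```python
# Evaluate a toy chi^2 on a grid of two Wilson coefficients and find the
# best-fit point -- contour plots of a likelihood are built the same way.
def chi2(c_e, c_mu):
    # invented stand-in: minimum at (0.3, -1.0), away from the SM origin
    return (c_e - 0.3) ** 2 / 0.04 + (c_mu + 1.0) ** 2 / 0.09

n = 41   # 41 x 41 grid over [-2, 2] in both directions
grid = [(-2 + 4 * i / (n - 1), -2 + 4 * j / (n - 1))
        for i in range(n) for j in range(n)]
best = min(grid, key=lambda p: chi2(*p))   # grid point of highest likelihood
```

In the real case, `chi2` would be replaced by minus two times the log-likelihood the package provides, and the grid values would be handed to a contour-plotting routine.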
In principle, it's actually trivial for anyone who has a little Python experience to make this plot. And I think this is very interesting. In fact, a motivation to make all of this open source was also that already in the past, after this P5 prime anomaly emerged, many people came to us with questions: oh, what about this scenario and that scenario where you have this and that relation between Wilson coefficients? And I think it's impossible to give the full information that is required for model building in a paper, because it's just too multi-dimensional. So this is another interesting example which one can explore, and in principle the codes have all the information that one needs to study it. Okay, great, great. And what was the other question I had? Okay, so, fantastic, this solution. So I'm not sure if you're aware of... So you showed us the layers, right? Going from the simplified models at the top down to the weak effective theory at the bottom. But there's one more layer above, which is not a simplified model but a larger, GUT-inspired model or something like that. So do you have anything to say regarding these situations? Have you seen if any model is particularly adequate for giving these solutions? I mean, as you already mentioned, Pati-Salam has received lots of attention, because for this U1 leptoquark it's kind of natural, since it emerges as a gauge boson of the Pati-Salam gauge group. But then, in the models that were constructed by Gino Isidori and others, of course, when you try to build a full model, it gets a bit complicated, because you have to avoid the additional constraints that you get.
But I think it's certainly a useful exercise because, for instance, in their model they found that it was actually not possible, or not easily possible, to only generate these vector boson coefficients: because the Pati-Salam group has other gauge bosons as well, they always generated, for instance, additional scalar operators and things like that. So those, I think, are interesting lessons that you can draw from model building that you would not see in a simplified model. So I think it's certainly important that one tries to build full models which embed these simplified models, because a vector leptoquark by itself is not really well defined; if it's a massive vector, it has to come from a gauge group or something. But yeah, so far I would say there is, at least to my taste, no compelling model where you would say, oh, this is so beautiful that it has to be true. We have to look for more data. Okay, great, great. Okay, now, since nobody else is asking any questions, I have one final question before I let you go, because I didn't really understand: in the RK and RK star fit, you put some universal coefficients and some not, so I didn't really understand the difference. I don't know if you can give a bit more information. Right, maybe it makes sense if I open the plot again. So basically the question behind this plot was the following. What if RK and RK star are just statistical fluctuations and go away? But this P5 prime cannot really be, or is very unlikely to be, a statistical fluctuation, because it's very significant; so if it is not new physics, then it has to be some systematic effect. So the question was: what if RK, RK star go away? Then basically we would need a lepton-flavor universal contribution to C9 to explain the P5 prime anomalies. And so this plot was basically meant to show which of these two solutions is actually preferred at the moment.
So RK and RK star of course only probe the vertical direction, and they're moving closer to the standard model, but the yellow ellipse is compatible with both, and it didn't move because there was no update at Moriond. And so now the combination seems to favor a slightly non-zero universal contribution. At first sight you would maybe say, okay, this is a bit ugly: how can I build a model which violates lepton flavor universality, but only a little bit? But then, when you look at this radiative effect with the tau, you see that in the leptoquark model this is actually exactly what you get for free, basically. So that's why we thought it's interesting to show. Of course it could be a coincidence, but that's basically the motivation for the plot. I see, I see. Okay, great, great. So let's see, have we got any questions on the YouTube chat? No questions, oh my God. It seems it's been perfectly clear. I don't know, maybe Roberto has just unmuted himself. Will we have a question? Or maybe you can comment on how long it can potentially take to compute such a best fit scenario, that is, to get the likelihood with several observables with your code. Yes, so at the moment, computing a single point in Wilson coefficient space takes roughly five seconds on my laptop, which is not as fast as I would like it to be. But the main reason for that is that there are so many observables in B decays which are measured in bins of q squared, and basically to compute them, you have to numerically integrate the differential rate over the bin, and this just turns out to be very costly. If it weren't for that, for instance if you only look at the electroweak precision data, it takes much less than a second, like a tenth of a second or so. But for the plots we made, which contain everything, it was roughly five seconds per point, and roughly 400 points for a plot.
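The cost described here comes from integrals of this kind, one per bin per observable. A minimal sketch with a made-up differential rate (the real integrand involves form factors and Wilson coefficients):

```python
# Each binned observable needs a numerical integral of the differential
# rate over its q^2 bin; here Simpson's rule on an invented integrand.
import math

def dGamma_dq2(q2):
    # toy stand-in for a differential decay rate, arbitrary units
    return q2 * math.exp(-q2 / 6.0)

def binned_rate(q2_min, q2_max, n=200):
    """Simpson's rule over one q^2 bin (n must be even)."""
    h = (q2_max - q2_min) / n
    s = dGamma_dq2(q2_min) + dGamma_dq2(q2_max)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * dGamma_dq2(q2_min + i * h)
    return s * h / 3.0

rate = binned_rate(1.1, 6.0)   # e.g. a [1.1, 6] GeV^2 bin
```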
And then you can parallelize it. But in the end, we had to complete the full numerics for the paper in an afternoon break of the Moriond session, because the data was presented in the morning and our results were presented in the evening. So that's the time scale for the plots. Let's see. Okay, thank you. That was awesome, having to do all of that within a day. Okay. I mean, to work under pressure. Sorry? That means to work under pressure. So no skiing on that day. Terrible. Okay, great. I don't know if there's any other question. Let me check the YouTube chat. No, everything's okay. So, okay, that's it. Thank you, David, for the wonderful webinar. We'll log out and we'll see you soon in a couple of weeks. Thanks a lot. Bye-bye.