Hey everybody. I'm Pedro Alcocer. I'm a staff data scientist at the Aave Companies, and I'm the lead data scientist on Lens Protocol. My talk today is going to answer this question: how do I integrate different sources of reputational evidence? Reputation is something that lots and lots of teams are working on right now, and I'm seeing some patterns that I think are less than ideal, and I want to address that.

My whole talk is really in this one slide. The answer to the question I just asked: represent your beliefs about your reputational signals as beta distributions, and then do Bayesian updating to combine them. I'll explain what all of these things mean.

First, here's what you should not do. What lots of teams are doing is taking a weight vector and a signal vector, computing the dot product, and out comes your reputation. Another way to think about that: you have your weights, you have your signals, you multiply them pairwise, you sum them up, and out comes a single reputation number. I think this is less than ideal, and I'll explain why.

Information about the variance of your signals is lost when you do what I just showed. And variance is important to know, because it's a measure of quality: it's how you represent the quality of a signal. I'll explain a little more about what I mean by quality in a minute. If you throw away the variance, you're throwing away quality information. You no longer know whether your final reputation came from lots of low-quality signals, or from a combination of high and low. You just don't know anymore.

This is what weighting looks like without variance: one number, in this case 0.8. That's really all you get, this one weight. But what if you did it like this? You still have that 0.8 mean point estimate, but now you have confidence intervals around it, and the underlying distribution behind this weight is actually quite wide. This is what I would call a low-quality signal. Compare it with a distribution that again has mean 0.8 but much narrower confidence intervals, so we're much more confident about the value of this weight than the previous one. That's a high-quality signal.

So now, if you have these two signals, one with a high value but low quality, and one with a low value but high quality, how do you combine them? Part of the answer is that you have to represent them as beta distributions. A beta distribution is just a statistical distribution that is bounded between zero and one and takes two parameters, alpha and beta. I'll explain what these parameters mean in a moment.

To give some concrete examples of potential signals: a really high-quality signal is something like, do you have a POAP from Aave? That's a very difficult signal to Sybil; maybe one out of every thousand users who hold that POAP are Sybils who somehow got it anyway. So that's pretty high quality. Compare that with having an ENS name, which is medium-hard to Sybil, because you still have to pay for it, but anybody can get many ENS names.
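To make the quality idea concrete, here's a minimal sketch in Python (assuming SciPy; the Beta(8, 2) and Beta(80, 20) parameters are illustrative assumptions, not numbers from the talk): two signals with the same 0.8 mean but very different interval widths.

```python
# Two beliefs with identical means but very different quality (spread).
from scipy import stats

def summarize(alpha: float, beta: float) -> str:
    """Mean and 95% interval of a Beta(alpha, beta) belief about a signal."""
    dist = stats.beta(alpha, beta)
    lo, hi = dist.interval(0.95)  # equal-tailed 95% credible interval
    return (f"Beta({alpha}, {beta}): mean = {dist.mean():.2f}, "
            f"95% interval = ({lo:.2f}, {hi:.2f})")

print(summarize(8, 2))    # same mean (0.80), wide interval: low-quality signal
print(summarize(80, 20))  # same mean (0.80), narrow interval: high-quality signal
```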
Here's where the beta distribution comes in. The way to think about the alpha parameter is that it's the number of non-Sybils you have in your sample. You fix the beta parameter to one and adjust the alpha parameter accordingly to match your real intuition: one in a thousand are Sybils, or one in a hundred, or one in ten.

Then, to combine the signals, we use Bayesian updating. For the beta distribution, Bayesian updating is really easy: all you have to do is add up the alphas and the betas. To give you an example, if I were combining the three signals I just showed, I'd add up the alphas and the betas and get a posterior distribution that better represents my belief about the true reputation, and how confident I am in the final value.

Here's a slightly wilder example. Say you had fifty signals and they were all low quality: Beta(3, 1), meaning something like one in four are Sybils. If you do Bayesian updating and combine all of these beta distributions, the resulting distribution is Beta(150, 50), which is actually a pretty high-quality distribution, because its confidence interval is really small.

To motivate this whole thing: say you end up with two outcome reputations whose means are identical. One of them we're much more likely to trust, because its distribution is narrow; we really know where that user lives reputationally. The other user might actually be quite low, or maybe really high, but we just can't tell. Hopefully this has a bit more context now and is a little easier to understand.

So, what I want you to do is represent your beliefs about reputational signals as beta distributions, and then combine them using Bayesian updating. That is essentially Bayesian inference in about seven minutes.

Thank you for listening. I'm still out of breath from this altitude. My name is Pedro Alcocer, and these are my contact details. I'm going to be releasing a lot more information about this kind of approach, which I think is a very useful one; it's the one we're going to be using for the Lens Protocol reputation system. You can find me on the Lens apps at palco.lens, on the bird app at palco, and on Telegram at palcoxyz. Thank you.
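A minimal sketch of the combining step in Python (again assuming SciPy), using the updating rule the talk describes (sum the alphas, sum the betas), applied to the fifty Beta(3, 1) signals example:

```python
# Combine beta-distributed signals by summing alphas and betas -- the
# simple updating rule described in the talk.
from scipy import stats

def combine(signals: list[tuple[float, float]]) -> tuple[float, float]:
    """Return the (alpha, beta) of the combined posterior."""
    alpha = sum(a for a, _ in signals)
    beta = sum(b for _, b in signals)
    return alpha, beta

# Fifty low-quality Beta(3, 1) signals combine into Beta(150, 50).
alpha, beta = combine([(3.0, 1.0)] * 50)
posterior = stats.beta(alpha, beta)
lo, hi = posterior.interval(0.95)
print(f"Beta({alpha:.0f}, {beta:.0f}): mean = {posterior.mean():.2f}, "
      f"95% interval = ({lo:.2f}, {hi:.2f})")
# -> Beta(150, 50): mean = 0.75, 95% interval = (~0.69, ~0.81)
```

The narrow interval around 0.75 is exactly the point of the example: many weak signals, combined this way, can still yield a confident posterior.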