So here we're looking at data from a Poisson distribution. We observe individual observations y_i, and each of them comes independently from a Poisson distribution that we're describing in terms of its mean mu. The first thing we want to think about is the bias of the Bayesian estimator.

Before we construct a Bayesian estimator, we need to think about what our conjugate prior is. For Poisson data the conjugate prior is a gamma distribution with parameters a and b, and we can leave a and b unspecified for the moment; they're generic. Recalling our updating rule, the posterior is

Gamma(a + sum_{i=1}^n y_i, b + n).

This distribution has mean (a + sum y_i) / (b + n), so our best guess at mu, which we write with a hat and a subscript B for Bayesian, is

mu_hat_B = (a + sum_{i=1}^n y_i) / (b + n).

Now recall that the mean of a Poisson distribution is mu. So the expected value of (sum y_i) / n equals mu, which means the expected value of sum y_i equals n mu. As everything else here is a constant,

E[mu_hat_B] = (a + n mu) / (b + n).

The bias of mu_hat_B is E[mu_hat_B] - mu, which equals (a + n mu - (b + n) mu) / (b + n), and this of course simplifies to

bias(mu_hat_B) = (a - b mu) / (b + n).

So that's the bias of the Bayesian estimator of the mean of a Poisson distribution when you've used a conjugate prior.

Now we need to think about the variance of this estimator:

Var(mu_hat_B) = Var((a + sum y_i) / (b + n)).

Recall that a is just a constant, so its contribution to the variance goes to 0, and b + n is a constant, so when we take it outside it gets squared.
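The bias formula above can be sketched as a couple of small Python functions; the function names and parameter values here are my own illustrative choices, not from the lecture. Note that the bias vanishes exactly when the prior mean a/b equals the true mu:

```python
# Closed-form bias and variance of the Bayesian (posterior-mean) estimator
# mu_hat_B = (a + sum(y_i)) / (b + n), for y_i ~ Poisson(mu) with a
# Gamma(a, b) prior. Function names are illustrative, not from the lecture.

def bias_bayes(a, b, mu, n):
    """Bias of mu_hat_B: (a - b*mu) / (b + n)."""
    return (a - b * mu) / (b + n)

def var_bayes(mu, b, n):
    """Variance of mu_hat_B: n*mu / (b + n)^2 (derived in the text below)."""
    return n * mu / (b + n) ** 2

# Prior mean a/b = 6/2 = 3 equals the true mu, so the bias is exactly 0:
print(bias_bayes(a=6.0, b=2.0, mu=3.0, n=10))  # → 0.0
```

As the prior's pseudo-count b grows relative to n, the bias term (a - b mu)/(b + n) shrinks toward (a/b - mu), the error of the prior mean itself.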
So it's equal to

Var(mu_hat_B) = (1 / (b + n)^2) * Var(sum y_i) = (1 / (b + n)^2) * n * Var(y_i),

because the y_i are independent of one another. And for a Poisson distribution the variance is the same as the mean, because our data are Poisson distributed; that's what you have to remember here, it's the distribution of the data that matters. So

Var(mu_hat_B) = n mu / (b + n)^2.

Therefore the mean squared error of the estimator is the bias squared plus the variance, and the two terms share the same denominator because squaring the bias squares its b + n:

MSE(mu_hat_B) = ((a - b mu)^2 + n mu) / (b + n)^2.

So that's how you calculate the mean squared error of the Bayesian estimator of the parameter of a distribution, given that your data are Poisson and you've used a conjugate prior.

The corresponding frequentist estimator would be mu_hat_F = (sum y_i) / n. The expected value of mu_hat_F equals mu, which implies the bias equals 0, and then you only have to think about the variance: the variance of (sum y_i) / n is n mu, the sum of the variances of each of the y_i, divided by n squared, which equals mu / n. Because the bias is 0, the mean squared error of the frequentist estimator is just the same as its variance. That's how you can compare the mean squared errors of the estimators of mu: one using a frequentist approach, and one using a conjugate prior with the resulting posterior in closed form.
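As a sanity check on the two MSE formulas, a quick Monte Carlo simulation can compare the simulated errors against the closed-form expressions. This is a sketch assuming NumPy is available; the parameter values (a = 2, b = 1, mu = 3, n = 20) are arbitrary choices for illustration:

```python
import numpy as np

# Monte Carlo check of the two MSE formulas derived above:
#   Bayesian:    MSE_B = ((a - b*mu)^2 + n*mu) / (b + n)^2
#   frequentist: MSE_F = mu / n
# Parameter values are arbitrary choices for illustration.
rng = np.random.default_rng(0)
a, b, mu, n, reps = 2.0, 1.0, 3.0, 20, 200_000

y = rng.poisson(mu, size=(reps, n))   # reps independent samples of size n
sums = y.sum(axis=1)

mu_hat_b = (a + sums) / (b + n)       # posterior mean under Gamma(a, b) prior
mu_hat_f = sums / n                   # sample mean

mse_b = np.mean((mu_hat_b - mu) ** 2)
mse_f = np.mean((mu_hat_f - mu) ** 2)

print(mse_b, ((a - b * mu) ** 2 + n * mu) / (b + n) ** 2)  # both ≈ 0.138
print(mse_f, mu / n)                                       # both ≈ 0.150
```

With these particular values the Bayesian estimator has the smaller MSE: its bias is small (the prior mean a/b = 2 is not far from mu = 3) and the shrinkage toward the prior reduces the variance enough to more than compensate.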