In this example we're going to look at the Pareto distribution with k known, as we've written it here: the density is f(x) = α k^α / x^(α+1) for x ≥ k. We want to find the maximum likelihood estimator for α. We've got some constraints here, but we're going to set out and do the maximum likelihood estimation in the traditional way.

So, maximum likelihood: we write down our likelihood. The likelihood of α given our data x₁, …, xₙ is the product from i = 1 to n of α k^α / xᵢ^(α+1). The α and the k^α don't depend on i, so we can take them outside the product and say it's α^n k^(nα) times the product from i = 1 to n of xᵢ^(−(α+1)). That's just rewriting what we already have.

Then we take the log likelihood, because it's much easier to maximize the log of the likelihood function. So ℓ(α) equals n ln α plus n α ln k, where ln k is a constant, remember. And then we bring in the last term: minus (α + 1) times the natural log of the product from i = 1 to n of xᵢ. We'll worry about that bit in a moment.

Then we maximize with respect to α. Differentiating, dℓ/dα is n/α plus n ln k minus the natural log of that product from i = 1 to n of xᵢ. And we can simplify that by writing it as n/α plus n ln k minus the sum from i = 1 to n of ln xᵢ, because the log of a product is the sum of the logs: ln(ab) = ln a + ln b.

Then we equate to zero. And remember, when we equate to zero we put a hat on the α. So dℓ/dα = 0 implies n/α̂ + n ln k − Σᵢ ln xᵢ = 0. We want to solve for α̂, so we start by rearranging to n/α̂ = Σᵢ ln xᵢ − n ln k.
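The derivation above can be checked numerically: a minimal sketch, assuming a small made-up sample and a known k (the values below are illustrative, not from the lecture), showing that the closed-form root really does zero out the derivative and sit at a maximum of the log likelihood.

```python
import math

# Hypothetical sample assumed drawn from a Pareto(alpha, k) density
# f(x) = alpha * k**alpha / x**(alpha + 1), x >= k; k is known.
k = 2.0
xs = [2.5, 3.1, 4.0, 2.2, 6.3]
n = len(xs)
sum_log_x = sum(math.log(x) for x in xs)

def log_likelihood(alpha):
    # l(alpha) = n ln(alpha) + n*alpha*ln(k) - (alpha + 1) * sum(ln x_i)
    return n * math.log(alpha) + n * alpha * math.log(k) - (alpha + 1) * sum_log_x

def score(alpha):
    # dl/dalpha = n/alpha + n ln(k) - sum(ln x_i)
    return n / alpha + n * math.log(k) - sum_log_x

# Closed-form root of the score equation from the derivation:
alpha_hat = n / (sum_log_x - n * math.log(k))
print(alpha_hat, score(alpha_hat))
```

The score is (up to floating-point noise) zero at α̂, and the log likelihood drops off on either side of it, confirming it is a maximum rather than a minimum.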
Now, you could divide through by n to get 1/α̂ = (Σᵢ ln xᵢ − n ln k)/n, thinking that makes it easier and simplifies things. But because you have this in terms of 1/α̂, you need to invert it anyway, so there's no point: we just invert directly and get α̂ = n / (Σᵢ ln xᵢ − n ln k). And that is how to find the maximum likelihood estimator of a Pareto distribution when k is known.
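As a sanity check on the final estimator, here is a sketch that simulates Pareto(α, k) data by inversion sampling (if U is Uniform(0,1), then X = k·U^(−1/α) has the Pareto survival function (k/x)^α) and recovers α with the formula just derived. The true parameter values are assumptions chosen for the demonstration.

```python
import math
import random

# Assumed true parameters for the simulation (illustrative choices).
random.seed(0)
alpha_true, k = 3.0, 1.5
n = 100_000

# Inversion sampling: X = k * U**(-1/alpha) is Pareto(alpha, k).
xs = [k * random.random() ** (-1.0 / alpha_true) for _ in range(n)]

# MLE with k known: alpha_hat = n / (sum(ln x_i) - n * ln k)
alpha_hat = n / (sum(math.log(x) for x in xs) - n * math.log(k))
print(alpha_hat)  # should land close to alpha_true
```

With a sample this large the estimate lands very close to the true α, since the standard error of α̂ shrinks like α/√n.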