Fractional response models are another variant of generalized linear model, related to binary logistic and probit regression models. Fractional data are data between 0 and 1: they can include 0, include 1, or fall strictly between those two values. Examples are the share of employees that join a pension plan, the share of time spent working, and so on. In this kind of model we are again interested in explaining the expected value, as we are in all generalized linear models.

One thing you need to know at this point is that these models assume independence of observations. For example, if we have five companies that split a market, we cannot explain their market shares using this kind of model, because the shares are not independent: the five companies' market shares must sum to 100%. Similarly, you cannot model how a person allocates time between working, sleeping, and free time. So we have the independence of observations assumption that we always have; if it does not hold, you need other kinds of models.

We model this kind of data, which varies between 0 and 1 and can take any value in between, with S-curve models, or with a linear model if we are sure that the predictions do not go beyond the 0 to 1 range. So either a linear model or an S-curve model, more typically the S-curve model. The relationship to probit and logit models is therefore quite close.

There are two general modeling approaches for this kind of data. The first is to transform the dependent variable: we apply the inverse of the S-curve and get values that run from minus infinity to plus infinity. We need some workaround if the data contain zeros and ones, because you cannot transform exact zeros and ones with the inverse S-curve; the S-curve goes from very close to zero to very close to one but never includes zero or one. We then apply least squares regression analysis with heteroskedasticity-robust standard errors, and that does the job and is not that problematic at all, as long as the predicted fractions are not very close to zero or to one.

The other approach is to use a generalized linear model. We use either a logit or probit link, or some other S-curve, and there are two commonly used distributions. One is the Bernoulli distribution, the distribution behind logistic and probit regression, which is normally for ones and zeros only. It turns out that it actually works quite well for fractional responses as well; I will go through why in another video. The other option is the beta distribution, and beta regression is more efficient than the Bernoulli-based regression for fractional data if the beta distribution actually characterizes the data. We apply heteroskedasticity-robust standard errors in any case. It is probably better to use the Bernoulli approach because it is more robust: beta regression can be inconsistent unless the response actually follows the beta distribution. For that reason I will not go through the beta distribution in detail, but you need to understand that it is fairly commonly used; still, it is probably better to use the Bernoulli.

The published example of fractional data is this article here. The authors look at how large a share of the patents in a patent pool a company controls, and they explain that they apply a normal generalized linear model, a logistic regression analysis, with the Bernoulli distribution.
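To make the two approaches concrete, here is a minimal sketch in Python with statsmodels. The data, the variable names, and the adjustment used to squeeze exact zeros and ones into the open interval are all illustrative assumptions, not taken from the article; it is just one way the two approaches could be set up.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated stand-in data: a fractional outcome y in [0, 1] and two predictors.
rng = np.random.default_rng(0)
n = 500
d = pd.DataFrame({"x1": rng.normal(size=n), "x2": rng.normal(size=n)})
linpred = -0.5 + 0.8 * d["x1"] - 0.4 * d["x2"]
d["y"] = np.clip(1 / (1 + np.exp(-linpred)) + rng.normal(scale=0.05, size=n), 0, 1)

X = sm.add_constant(d[["x1", "x2"]])
y = d["y"]

# Approach 1: transform the dependent variable with the inverse S-curve (logit)
# and run least squares with heteroskedasticity-robust standard errors.
# Exact zeros and ones cannot be transformed, so squeeze them slightly into the
# open interval first (one possible workaround; the constant is arbitrary).
eps = 0.005
y_adj = y.clip(lower=eps, upper=1 - eps)
y_star = np.log(y_adj / (1 - y_adj))                 # inverse logistic transform
ols_fit = sm.OLS(y_star, X).fit(cov_type="HC1")      # robust standard errors
print(ols_fit.summary())

# Approach 2: fractional logit, i.e. a GLM with a logit link and the Bernoulli
# (binomial) distribution applied directly to the fractional outcome,
# again with robust standard errors.
frac_logit = sm.GLM(y, X, family=sm.families.Binomial()).fit(cov_type="HC1")
print(frac_logit.summary())
```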
Coming back to the published example: it is a normal logistic regression model, just applied to fractional data. That sounds like something you should not, or cannot, do, but it actually works very well for reasons that I will explain in another video. The results are interpreted in exactly the same way as for normal S-curve models such as logistic models: you plot the marginal effects, and that is what you interpret. Do not look at the coefficients, because they are difficult to interpret.
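As a rough illustration of that interpretation, average marginal effects for the fractional logit fitted in the sketch above could be computed by hand as the average slope of the S-curve times each coefficient; the variable names carry over from that sketch and are assumptions.

```python
# Average marginal effects: the derivative of the logistic S-curve, p * (1 - p),
# averaged over the sample and multiplied by each coefficient.
p = frac_logit.predict(X)                               # fitted fractions
ame = np.mean(p * (1 - p)) * frac_logit.params.drop("const")
print(ame)                                              # interpret these, not the raw coefficients
```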