Maximum likelihood estimation is a modern technique for estimating simultaneous equation models. For many people the term structural equation modeling, or SEM, is a synonym for maximum likelihood estimation of this kind. This technique differs from the older econometric techniques in that it requires numerical optimization, so there is no closed-form solution, no single equation that we can apply to estimate a model. Instead, what we do is start with something: we set starting values for the parameters, for example using least squares, and then, based on the model and those parameter values, we calculate what kind of covariance matrix the model implies. In practice the computer applies matrix algebra, but conceptually we can think of this as being like an application of the tracing rules. Then the estimation criterion is that we need to make these model-implied covariances as close as possible to the sample covariances. So we adjust the model parameters so that the implied covariance matrix is as close as possible to the sample covariance matrix. For just-identified models we can get them to be exactly equal, because just identification basically means that all the information we have is required for estimation, so there is nothing in excess. For over-identified models, which most useful models are, we require a more precise estimation rule, because the computer does not know what "as close as possible" means, so we need an operational definition. There are a couple of ways of defining "as close as possible", which lead to different estimation techniques, but in maximum likelihood estimation "as close as possible" refers to finding an implied covariance matrix such that the observed covariance matrix would be a likely observation if the data were multivariate normal.
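The just-identified case can be sketched numerically. This is a minimal illustration, not the output of any particular SEM package: the sample covariance matrix below is made up, and the model is a single regression y = b*x + e, whose implied covariance matrix follows from the tracing rules.

```python
import math

# Hypothetical numbers: a made-up 2x2 sample covariance matrix for (x, y).
S = [[2.0, 1.2],
     [1.2, 1.5]]

def implied_cov(b, var_x, var_e):
    # Tracing-rule implied covariances for the regression y = b*x + e:
    # var(x) = var_x, cov(x, y) = b*var_x, var(y) = b^2*var_x + var_e
    return [[var_x, b * var_x],
            [b * var_x, b * b * var_x + var_e]]

def f_ml(sigma, s):
    # Normal-theory ML discrepancy: F = ln|Sigma| - ln|S| + tr(S Sigma^-1) - p,
    # with p = 2 observed variables.
    det = sigma[0][0] * sigma[1][1] - sigma[0][1] * sigma[1][0]
    det_s = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    inv = [[sigma[1][1] / det, -sigma[0][1] / det],
           [-sigma[1][0] / det, sigma[0][0] / det]]
    trace = (s[0][0] * inv[0][0] + s[0][1] * inv[1][0]
             + s[1][0] * inv[0][1] + s[1][1] * inv[1][1])
    return math.log(det) - math.log(det_s) + trace - 2

# Three free parameters (b, var_x, var_e) and three unique elements in S:
# just identified, so the implied matrix reproduces S exactly and F reaches 0.
b_hat = S[0][1] / S[0][0]
var_e_hat = S[1][1] - b_hat * S[0][1]
sigma_hat = implied_cov(b_hat, S[0][0], var_e_hat)
assert abs(f_ml(sigma_hat, S)) < 1e-9
```

With these made-up numbers the implied and sample matrices coincide exactly, which is what just identification means: there is no leftover information against which the model could misfit.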
So there is a multivariate normality assumption here, and then we calculate what the likelihood of getting a sample with this particular covariance matrix is if this is our population covariance matrix and the data are multivariate normal. The computer then adjusts these error variance estimates and these regression estimates to find the maximum of the likelihood. So basically this is fitting two covariance matrices together, and there are other techniques that use the same principle, but maximum likelihood is the most commonly used technique among them. How does this technique compare against GMM, the other modern alternative? Maximum likelihood is more efficient, and the reason it is more efficient is that we introduce the multivariate normality assumption, which GMM does not make; every time you introduce assumptions, you generally gain efficiency. It is also more flexible: for example, maximum likelihood estimation can be used to estimate this model here, and this cannot be estimated with GMM. The reason why this fails with GMM, or here I am using seemingly unrelated regressions, or with three-stage least squares, is that GMM and the other techniques in that family assume that these error terms are uncorrelated. With maximum likelihood estimation, if we have a good reason to do so, we can constrain these two error terms to be uncorrelated, which identifies the model, and that allows us more flexibility. Of course, constraining two quantities that we don't know, the unknown causes, to be uncorrelated is a rather strong assumption that should be justified based on theory, but it is possible to do. Maximum likelihood estimation is more computationally challenging: whereas in GMM you always get estimates, because it is just a straightforward application of matrix algebra, in maximum likelihood estimation the computer has to iteratively find a solution, and these numerical optimization techniques can sometimes fail.
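The iterative search just described can be sketched with a toy optimizer. Everything here is hypothetical for illustration: the covariance matrix is made up, the model is y = b*x + e with var(x) fixed at its sample value, and the optimizer is a deliberately crude pattern search rather than the gradient-based routines real software uses.

```python
import math

# Made-up sample covariance matrix of (x, y); var(x) is fixed at 2.0,
# so the free parameters are the slope b and the error variance var_e.
S = [[2.0, 1.2],
     [1.2, 1.5]]

def discrepancy(b, var_e):
    # Normal-theory ML fit function for the model y = b*x + e.
    sigma = [[2.0, 2.0 * b],
             [2.0 * b, 2.0 * b * b + var_e]]
    det = sigma[0][0] * sigma[1][1] - sigma[0][1] ** 2
    if det <= 0 or var_e <= 0:
        return float("inf")  # reject inadmissible parameter values
    det_s = S[0][0] * S[1][1] - S[0][1] ** 2
    inv = [[sigma[1][1] / det, -sigma[0][1] / det],
           [-sigma[1][0] / det, sigma[0][0] / det]]
    tr = (S[0][0] * inv[0][0] + S[0][1] * inv[1][0]
          + S[1][0] * inv[0][1] + S[1][1] * inv[1][1])
    return math.log(det) - math.log(det_s) + tr - 2

# Start from rough values, try small moves in each parameter, keep any
# move that lowers the discrepancy, and shrink the step when nothing
# improves. This is the "iteratively find a solution" part of ML.
b, var_e, step = 0.0, 1.0, 0.5
for _ in range(200):
    improved = False
    for db, dv in ((step, 0.0), (-step, 0.0), (0.0, step), (0.0, -step)):
        if discrepancy(b + db, var_e + dv) < discrepancy(b, var_e):
            b, var_e = b + db, var_e + dv
            improved = True
    if not improved:
        step /= 2
```

The search converges to the same values least squares would give here, because this toy model is just identified; the point is only to show the shape of the iteration, including the admissibility check where a bad region of the parameter space can make such optimizers fail.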
There is the multivariate normality assumption, but it can be relaxed by applying other techniques from the same family that use a different measure of "as close as possible" between the fitted covariance matrix and the observed matrix, such as the ADF (asymptotically distribution free) estimator. There are also robust statistics and corrected statistics that can be applied if a violation of multivariate normality is a big concern with the data. I will not go into detail on the normality assumption, but it is useful to understand at this point that if we decide between GMM and ML, then ML adds an assumption that GMM does not make.
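To make the idea of "a different measure of as close as possible" concrete, here is a sketch comparing two discrepancy functions on the same pair of matrices. The matrices are made up for illustration; F_ULS is the unweighted least-squares discrepancy, F_ML is the normal-theory one, and the ADF discrepancy (not shown) replaces the normal-theory weighting with one based on fourth-order sample moments.

```python
import math

# Made-up observed matrix S and a model-implied matrix Sigma that does
# not reproduce it exactly (as in an over-identified model).
S     = [[2.0, 1.2], [1.2, 1.5]]
Sigma = [[2.0, 1.0], [1.0, 1.4]]

def f_uls(s, sigma):
    # Unweighted least squares: 0.5 * tr((S - Sigma)^2) for 2x2 symmetric.
    d = [[s[i][j] - sigma[i][j] for j in range(2)] for i in range(2)]
    return 0.5 * (d[0][0] ** 2 + 2 * d[0][1] ** 2 + d[1][1] ** 2)

def f_ml_norm(s, sigma):
    # Normal-theory ML: ln|Sigma| - ln|S| + tr(S Sigma^-1) - p.
    det = sigma[0][0] * sigma[1][1] - sigma[0][1] ** 2
    det_s = s[0][0] * s[1][1] - s[0][1] ** 2
    inv = [[sigma[1][1] / det, -sigma[0][1] / det],
           [-sigma[1][0] / det, sigma[0][0] / det]]
    tr = (s[0][0] * inv[0][0] + s[0][1] * inv[1][0]
          + s[1][0] * inv[0][1] + s[1][1] * inv[1][1])
    return math.log(det) - math.log(det_s) + tr - 2

# Both functions are zero exactly when Sigma reproduces S, but away from
# that point they rank candidate matrices differently, which is why they
# lead to different estimators with different distributional assumptions.
```

The ML function is where the multivariate normality assumption enters; the least-squares function makes no such assumption but gives up efficiency, which mirrors the GMM-versus-ML trade-off discussed above.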