Let's take a look at how I troubleshoot non-converging models. What is my workflow? A model can fail to converge in many different ways, and it is useful to understand the different techniques for troubleshooting. But when you start troubleshooting, you apply one technique after another, so which of the techniques I have covered here would I go for first, and which ones are last-resort techniques? That boils down to two factors: how easy the diagnostic is to implement, and how likely it is to give me useful information about the particular problem.

We also need to understand that the workflow differs between contexts, and the most important context is whether this is a model I have been working on myself, or whether it is, say, a graduate student who is just starting out with structural equation modeling or some other kind of statistical modeling and comes to me with a model that does not work. I specify different kinds of models than students do, and I make different kinds of errors. For example, I rarely specify models that are not identified, because I have seen so many models that I know already at the specification stage which models can be estimated and which cannot. Someone with less experience might well have an identification issue, so quite often the problematic models that beginners bring me cannot be estimated because they are not identified, whereas my own problems tend to be about data and estimation rather than about the model itself. That kind of background information tells me what to look at first.

Another part of the context is what information is already available. Is there a warning? Does the warning relate to identification or to a computational problem? Is there any output, and if so, what do the standard errors and the estimates look like? What is already available and what the background of the model is both affect the workflow, but let's take a look at the general workflow that I might apply when a student comes to me with a model that does not work.
The first thing I would do is read and understand the warning. If I have not seen the warning before, I would check my statistical software's user manual for an explanation of the problem, and I would put the warning into Google and see what I find. Sometimes a warning is difficult to understand even when someone explains it to you, so there is a trade-off: do you want to spend a week trying to understand what a warning means, or do you want to proceed with the diagnostics? This of course depends on your experience. If you have a lot of experience, warnings are more useful than if you have very little; if you have no idea what a Hessian matrix is, a warning like "Hessian is not positive definite" is not a very useful place to start.

Next I would check the standard errors. A missing standard error, or a standard error that is very large, is an indication of an identification problem, and identification problems are more severe than computational problems, because a model that is not identified cannot be estimated no matter what, or only with some qualifications. You really need to understand the consequences if you decide to go with a model in which some parameters are not identified but others are.

Then I would do some eyeballing: I would draw a path diagram of the model and check it against the identification rules. Do all latent variables have scales? Do correlated errors or bi-directional paths have the proper number of instrumental variables? Do all factors have at least two loadings? Is it possible that a two-indicator factor is not identified because it is not sufficiently strongly associated with the other factors? Are the error variances of all single-indicator factors fixed to specific values? Those are the kinds of things that I know must hold for the model to be identified.

Then I would look at the starting values. I would print them out, and if there are any very large values or values that look unreasonable, I would adjust them and simply try whether that gets the model to work.

Then, if the model does not converge, I would print the Hessian and the gradient. If it does converge, I would print the variance-covariance matrix of the estimates and check whether two parameter estimates are very highly correlated, either positively or negatively, which indicates a potential identification issue where those two parameters compete for the same variance or covariance in the data. (There is a small code sketch of these checks after this section.)

Then I would try a different optimizer: if one optimizer fails, maybe another one can help. This often does not help, but it is very easy to do, so it is in my toolbox. (This is also sketched below.)

Then there are the strategies that require a bit more work. If I still cannot get the model to work, I would look for a simpler model that I can get to work. If I have a model with, let's say, ten factors, I would split it into two five-factor models and estimate both separately. Can I get those to work? If so, I can use their estimates as starting values for the full model. Or I can use a model-building strategy: start with five factors, and if that works, add a sixth factor, and if that works, add a seventh, and try to understand the problem by looking at the point at which convergence starts to fail as I move from the simpler model to the more complex one.
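To make two of these diagnostics concrete, here is a minimal sketch in Python, assuming your software can hand you the Hessian of the fit function and the covariance matrix of the estimates as NumPy arrays. The names hessian, vcov, and names are hypothetical placeholders, not any particular package's API.

```python
import numpy as np

def check_hessian(hessian):
    """Report whether the Hessian is positive definite via its eigenvalues."""
    eigenvalues = np.linalg.eigvalsh(hessian)
    print("Smallest eigenvalue:", eigenvalues.min())
    if eigenvalues.min() <= 0:
        print("Hessian is not positive definite; suspect an identification or convergence problem.")

def flag_correlated_estimates(vcov, names, threshold=0.95):
    """Flag pairs of parameter estimates whose correlation is extreme."""
    sd = np.sqrt(np.diag(vcov))
    corr = vcov / np.outer(sd, sd)
    for i in range(len(names)):
        for j in range(i):
            if abs(corr[i, j]) > threshold:
                print(f"{names[i]} and {names[j]} correlate at {corr[i, j]:.2f}")
```

The 0.95 cutoff is just a rule of thumb; the point is to spot pairs of estimates that are nearly perfectly correlated and therefore competing for the same variance or covariance in the data.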
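The optimizer swap is equally easy to sketch, assuming the model's fit function can be wrapped as a plain Python function of the free parameters. The fit_function below is a made-up stand-in, and the three method names are simply examples of optimizers that scipy provides.

```python
import numpy as np
from scipy.optimize import minimize

def fit_function(theta):
    # Hypothetical stand-in for the model's discrepancy function.
    return np.sum((theta - np.array([1.0, 2.0, 3.0])) ** 2)

start = np.zeros(3)
for method in ["BFGS", "Nelder-Mead", "L-BFGS-B"]:
    result = minimize(fit_function, start, method=method)
    print(f"{method:12s} converged={result.success} estimates={np.round(result.x, 3)}")
```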
Finally, the technique that requires the most work. It is not rocket science, but it does require some effort: I would do an empirical identification check, where I estimate the model using randomly generated starting values, and that tells me which parameters are not identified and which might be; there is a small sketch of this check at the end. So this is my workflow with these various tools, and I adjust it quite a bit depending on the background of the problem.
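To close, here is a minimal sketch of that empirical identification check, using a deliberately toy objective (fit_function is made up, not a real model likelihood) in which one parameter is identified and two are not.

```python
import numpy as np
from scipy.optimize import minimize

def fit_function(theta):
    # Toy objective: theta[0] is identified, but only the sum theta[1] + theta[2]
    # is, so the individual values of theta[1] and theta[2] depend on the start.
    return (theta[0] - 1.5) ** 2 + (theta[1] + theta[2] - 3.0) ** 2

rng = np.random.default_rng(1)
solutions = []
for _ in range(20):
    start = rng.uniform(-5, 5, size=3)      # random starting values
    result = minimize(fit_function, start, method="BFGS")
    if result.success:
        solutions.append(result.x)

solutions = np.array(solutions)
# Identified parameters land on the same estimate from every start; parameters
# with a large spread across the converged solutions are suspect.
print("Std of estimates over random starts:", np.round(solutions.std(axis=0), 3))
```

Here the first parameter converges to the same value from every start, while the other two scatter, which is exactly the pattern that points to an empirical identification problem.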