This year I got involved with CARMA, which is a group focused on research methods training. It is run by Larry Williams, the former editor of Organizational Research Methods, and the team works in close association with the Academy of Management. I wouldn't call it a company; it's more like a non-profit organization, a small association that works with many different universities. CARMA has become international and now collaborates with universities in Europe as well, even though it is based in Texas. When I got involved, I wanted to contribute something, and CARMA had this seminar series that I liked. I was asked to give a talk at 3 p.m. US time and I said yes, not realizing that I had just booked myself a live lecture at 10 p.m. on a Friday, Finnish time. So after I was already done for the week, I still had to prepare another lecture and then deliver it.

Then we started discussing what topic I would like to present. I had been thinking about the kinds of things that people struggle with but that very few teachers and very few textbooks cover, and we considered model diagnostics: what do you do when your model doesn't fit? How do you diagnose misfit? Then I threw out the idea of taking a look, during that one-hour seminar, at model non-convergence. What do you do when your model fails to produce results? That turned out to be a very interesting topic, and I thought about it for a while, because I had myself encountered some really difficult models to troubleshoot.
I estimated those models with Stata, and we actually had to contact Stata user support because Stata wouldn't even start to estimate the model. Eventually I got help from user support, and they told me about an undocumented feature that I could use to get my model to estimate. I have also been teaching non-convergence to students: I give them models that don't produce results, teach them a set of diagnostics that they can apply to those models, and give them the task of figuring out why the model doesn't work.

Model non-convergence, where you don't get results at all, you get partial results, standard errors are missing, or there are results but with some kind of warning, basically boils down to two different classes of problems. One is that your model is not identified, or the model is not identified given your data, which is called empirical identification. It can be a model problem, and in those cases the only thing you can do is fix the model. Sometimes people are asking more of their models than can be estimated: if you have one correlation, then you can't estimate paths that go from one variable to another and then back. You can't estimate bi-directional effects from one correlation. The other class of problems is computational: it might in principle be possible to estimate your model, even with your data, but for some reason the computer fails to do so. When I started with SEM, I encountered this problem right away. I used the sem package for R, not lavaan but the older sem package, and my first model gave me the message that the Hessian is not positive definite. I had no idea what that meant. I know now, because somebody taught me the meaning of the Hessian matrix and how it relates to estimation. So you get these fairly cryptic error messages, or you might get endless output, just a likelihood value printed over and over.
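To make the identification problem concrete, here is a minimal Python sketch. It is a made-up toy, not real SEM algebra: a "model" whose only implication is that the observed covariance equals the product of two free parameters. One observed number cannot pin down two parameters, so infinitely many parameter pairs fit perfectly, and no optimizer can pick between them.

```python
# Toy illustration of (empirical) non-identification: the model implies
# cov = a * b, but we observe only one number, so the pair (a, b) is not
# identified -- every pair with a * b == 0.5 fits the data perfectly.

def discrepancy(a, b, observed_cov=0.5):
    """Squared difference between model-implied and observed covariance."""
    return (a * b - observed_cov) ** 2

# Many different parameter pairs give an identical, perfect fit:
solutions = [(0.5, 1.0), (1.0, 0.5), (0.25, 2.0)]
for a, b in solutions:
    print(a, b, discrepancy(a, b))  # discrepancy is 0.0 for every pair
```

The fit function is flat along the whole curve a * b = 0.5, which is exactly the situation where an estimator either wanders forever or stops with a warning; the only real fix is to change the model, as the text says.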
The value gets gradually closer to zero, less negative, so it gets larger, but there are no estimates. The way people typically deal with these problems is that you get really frustrated, you don't really understand what the problem is, and then you bang your head against the wall, just trying different things without any clear strategy and hoping that some of them fix your model so that you get some results. And when you get results, you conclude that you solved the problem and report them. These strategies are problematic, because when you change your model you might fix an issue that is not the issue you are actually facing. For example, you might add constraints that don't make sense: they make the error disappear, but they don't solve the problem, and then the results are of course not very trustworthy, and this is really difficult to detect.

So in the seminar I gave a talk using some of the materials that I teach to doctoral students on the last course that I ran. I did not go into the technicalities of how numerical optimization works within your SEM software, but we took a look at a couple of case studies. We had a starting value problem, so I showed the audience what a starting value problem looks like and how you diagnose it. We also discussed what it means for a model to converge, what is required, and where it can fail. I realized that I had lots of material in my video library related to non-convergence, and I told the participants that I would prepare a playlist containing explanations of these problems in a lot more detail than we had time for in that one-hour seminar. So this is the playlist. It starts by explaining what convergence is, and then we start looking at the ways in which it can fail. The first topic is identification, and there is some material on identification.
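That "likelihood printed over and over, creeping toward zero" symptom can be reproduced with a classic toy case: maximum likelihood logistic regression on perfectly separated data. This minimal Python sketch (an illustrative stand-in, not SEM) runs Newton-Raphson updates by hand; the log-likelihood keeps getting less negative while the estimate grows without bound, so the iterations never settle.

```python
import math

# Perfectly separated data: y is 1 exactly when x > 0, so the ML
# estimate of the logistic slope does not exist (it runs off to infinity).
xs = [-2.0, -1.0, 1.0, 2.0]
ys = [0, 0, 1, 1]

def log_likelihood(b):
    """Log-likelihood of a one-parameter logistic model p = 1/(1+exp(-b*x))."""
    ll = 0.0
    for x, y in zip(xs, ys):
        p = 1.0 / (1.0 + math.exp(-b * x))
        ll += y * math.log(p) + (1 - y) * math.log(1 - p)
    return ll

b = 0.0
for step in range(8):
    ps = [1.0 / (1.0 + math.exp(-b * x)) for x in xs]
    gradient = sum((y - p) * x for x, y, p in zip(xs, ys, ps))
    information = sum(p * (1 - p) * x * x for x, p in zip(xs, ps))
    b += gradient / information          # Newton-Raphson update
    print(f"iter {step}: b = {b:8.3f}, loglik = {log_likelihood(b):9.5f}")
# The printout mimics the endless-output symptom: the log-likelihood
# creeps toward zero at every iteration, but b never stops growing.
```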
The second class of problems is related to numerical optimization, so I included a tutorial, not that brief, maybe two hours of video material, on maximum likelihood estimation: how it really works and how it can fail. At the end I have a couple of diagnostic strategies that I like to use personally, involving inspection of the Hessian matrix, inspecting starting values, estimating smaller models and then using those estimates as starting values, and that kind of thing.

The final video is for a person whose name I don't remember, someone in the audience of the CARMA seminar who asked how you learn these things. It is easy to listen to somebody explaining how to troubleshoot non-convergent models, or let's say relatively easy, because it requires some background knowledge, and without that background it might not be easy at all. But you might listen to somebody explaining and demonstrating how they troubleshoot a non-convergent model and still wonder: how do you do it yourself? How do you go from understanding in principle how to troubleshoot non-convergent models to actually being able to do it? This is where practice comes into play, but the problem is that there are not that many good practice cases available. If you look at textbooks or user manuals of SEM software, they typically present only examples where the model converges, and there are very few examples where things don't work along with explanations of what you can do about it.
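As a sketch of the Hessian diagnostic, here is a small Python example (the toy objective is an assumption for illustration, not the actual SEM likelihood): compute a finite-difference Hessian at a candidate solution and check positive definiteness. An objective with a flat ridge of equally good solutions yields a singular Hessian there, which is the situation behind the "Hessian is not positive definite" message.

```python
# Numerically check whether the Hessian at a "solution" is positive
# definite. The toy objective (a*b - 0.5)**2 has a flat ridge of equally
# good minima, so its Hessian on the ridge is singular.

def f(a, b):
    return (a * b - 0.5) ** 2

def hessian(a, b, h=1e-4):
    """Central finite-difference Hessian of f at (a, b)."""
    faa = (f(a + h, b) - 2 * f(a, b) + f(a - h, b)) / h**2
    fbb = (f(a, b + h) - 2 * f(a, b) + f(a, b - h)) / h**2
    fab = (f(a + h, b + h) - f(a + h, b - h)
           - f(a - h, b + h) + f(a - h, b - h)) / (4 * h**2)
    return [[faa, fab], [fab, fbb]]

H = hessian(0.5, 1.0)   # a point on the ridge: 0.5 * 1.0 == 0.5
det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
trace = H[0][0] + H[1][1]
# A symmetric 2x2 matrix is positive definite iff det > 0 and trace > 0;
# here det is (numerically) zero, so the check fails.
print("Hessian:", H)
print("det =", det, "-> positive definite:", det > 1e-6 and trace > 0)
```

In real software the same idea applies in higher dimensions: eigenvalues of the Hessian that are zero or negative point at the parameters involved in the problem.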
For example, the Stata user manual, which is really great, contains a section on troubleshooting, but it stays on a conceptual level; there are no examples. So I was thinking about how you come up with examples, and during the seminar I came up with this idea: you can create nice teaching examples, for yourself or to work through with a pair, by taking an existing data set and model combination from a textbook or the user manual of your software and then breaking it. Try to make the model not converge; this is actually fairly easy to do if you know the different causes of non-convergence. Then give that broken model and data combination to a friend and ask the friend to figure out what the problem is. This is a lot easier than troubleshooting real data sets, because if you don't have much experience with the diagnostics, you are basically applying diagnostics that you don't fully understand to a problem that you don't know, and then hoping to come up with something useful. If we take one of these unknowns out, so that you apply diagnostics that you don't fully understand to a problem that you do know, then it is a lot easier to get those diagnostics to work. And once you can apply the diagnostics to cases where you know what the problem is, it becomes much easier to apply the same diagnostics to real data sets that are problematic for reasons that you don't know. So that is the last video. I hope you enjoy the playlist.
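Here is one way the "break it yourself" exercise might look in miniature (plain Python gradient descent on least squares, an illustrative stand-in for an SEM estimator): the original fit converges, and then rescaling one variable, a classic cause of computational non-convergence, makes the very same settings diverge.

```python
# "Break it yourself" exercise sketch: start from an estimation that
# works, change one thing so it no longer converges, and hand the broken
# version to a friend to diagnose.

def fit_slope(xs, ys, lr=0.01, steps=200):
    """Minimize sum((y - b*x)^2) by fixed-step gradient descent; return b."""
    b = 0.0
    for _ in range(steps):
        grad = sum(-2 * x * (y - b * x) for x, y in zip(xs, ys))
        b -= lr * grad
    return b

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]            # roughly y = 2 * x

good = fit_slope(xs, ys)
print(good)                          # converges near 2.0

broken_xs = [x * 1000 for x in xs]   # same data, rescaled predictor
broken = fit_slope(broken_xs, ys)
print(broken)                        # blows up instead of converging
```

The friend who receives only the broken version then gets to practice the diagnostics on a problem with a known cause: watch how the estimate evolves across iterations, check the variable scales, and so on.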