Okay, welcome to ESMARConf 2022 and this workshop on structural equation modeling. This workshop is being live-streamed to YouTube and has a group of participants taking part live. Welcome to all of you here and all of you watching on YouTube. If you are watching via YouTube and have a question for our presenter, please reply to the tweet in the Evidence Synthesis Hackathon Twitter feed, and we'll try to reply to those as soon as possible. Also, we'd like to draw your attention to our code of conduct, which is available on the website, esmarconf.github.io. So it's with great pleasure that I introduce you to Arindam, Arindam Basu, who's at the University of Canterbury in New Zealand, and he's going to talk to us today about structural equation modeling. Really looking forward to this. Over to you, Arindam.

Thank you, Matt. Hello and welcome, everyone. My name is Arindam Basu and I am from the University of Canterbury, where I am an associate professor of environmental health. Today we are going to hold a workshop about meta-analysis, but we will be using structural equation modeling. So let me share my screen and we'll go over some of the initial things that we will be doing in this workshop. Matt, is it okay if I take off the banner and put my screen in? "Yeah, that's fine." Let me do that, sharing with sound, optimized for any video clips. Can you see my screen? It says that participants can see my screen. "Yeah, we can see your screen now. You can exit full screen or do however you like." I think I'll probably have to exit full screen at some point. So, one of the things that's quite important for us in this structural equation modeling workshop is this:
This is a fully interactive workshop, which means: please feel free to interrupt me at any time, ask questions, post observations, and things like that. In that sense, I don't want to be the talking head all the time. The second thing that I want to emphasize in this workshop is that we will not be able to cover everything, because both structural equation modeling and meta-analysis are semester-long courses in their own right; you could do multiple semesters of structural equation modeling and meta-analysis. I don't want to compress all of that into a two-hour workshop, so I will probably be touching only the surface and the main problems. You're more than welcome to continue this conversation even after this workshop if you're interested. On the front screen, you can see my email address, arindam.basu@canterbury.ac.nz. I am also active on Twitter; it's @arinbasu. Those are the two main channels through which you can get in contact with me, ask me questions, and continue this conversation. For this workshop I have also got a GitHub repository set up, which I will give out in a bit. The plan of the workshop is going to be something like this: I'm going to talk about meta-analysis and some of the key things about meta-analysis that make structural equation modeling a particularly useful segue into it, and emphasize that meta-analysis essentially is a multilevel modeling exercise. I'm going to talk a little bit about what meta-analysis looks like to the eye of structural equation modeling and structural equation models. And for this workshop, we will be using an easy-to-use (I say easy to use, but sometimes difficult to understand) package.
I'm going to show you an R package called metaSEM, written by Mike Cheung. Mike Cheung has done phenomenal work, including writing a book and a very readable paper on metaSEM in the Journal of Statistical Software; I encourage you to look it up. Another name that comes to mind in terms of conducting meta-analysis with structural equation modeling is Mathias Harrer. He has already spoken at this conference, and he has a fantastic webpage; if you have the time, go take a look. His package is called dmetar, and I'll introduce that to you as well. I will be speaking mostly and only about univariate and multivariate meta-analysis in this workshop; however, applications of structural equation models in meta-analysis extend beyond this. You can do things like synthesize entire correlation matrices and then use structural equation modeling to analyze them. There are also some other details of structural equation modeling and meta-analysis that are quite useful but that I will not be able to touch on in this workshop. And of course, you're free to ask any questions and raise discussions; I will be keeping an eye on the chat for questions, just in case. If you have any questions, please feel free to draw my attention using the hand symbol or a thumbs up, however you feel comfortable asking. Because this is a workshop, and to set the scene and tone, I have put together a Google Docs document. The document lives here, but I'm going to leave the link in the chat box. So this is the Google Docs document. Can you see my Google Docs document? If you can, can you please give a thumbs up or something? I will also post it in the chat, just in case you want to connect to it now.
What I have done in the Google Docs document is put together a list of the attendees, so if you can, please leave your name, your email address, and your Twitter handle if you have one. That would be really useful. Then there are a couple of questions. Don't answer these questions yet; we will get to them in due time. The main thing is your name, email address, and Twitter handle, because then we can continue this conversation once the workshop is over. So perhaps we can wait a few minutes while you do that, and see if you have got any questions. What questions do you have before we get started? We'll give people some time to fill in their email addresses and Twitter handles, and we'll start in a minute; if there are any other questions, please ask me. The second requirement for this workshop is this: when you signed up, Matt and Neal sent you an email asking you to kindly set up an RStudio instance. If you have got an RStudio account already set up, can you give me a thumbs up to say that that's the case? Thank you very much; quite a few thumbs are going up now. I'm assuming that you either have access to RStudio or have got an account on RStudio Cloud. That's really useful, because it will help me show you some work which you can then reproduce in RStudio. I will be moving back and forth between my RStudio instance and this presentation so that you will have a sense of what actually happens in real life as we go ahead with meta-analysis using structural equation modeling. The other thing that's quite useful for this workshop is a structural equation modeling package called lavaan.
lavaan is actually a very useful and easy structural equation modeling package, but unfortunately, at this stage, lavaan is not very well suited to doing meta-analysis; about the only thing you can do with it is a fixed-effects univariate meta-analysis. But as more gets written and as time goes on, hopefully lavaan will add more features so that in future it can be used as a good package for conducting meta-analysis as well. For now, though, we have a very good and very robust structural equation modeling package for conducting meta-analysis within R, and that is metaSEM, written by Mike Cheung. metaSEM is based on OpenMx. OpenMx is a very useful and all-encompassing structural equation modeling package; however, I'm not going to cover it in this workshop, because that's another large topic in itself, so we'll have to cut some corners. I am going to show you, using metaSEM, the different bits of code you can work with, and metaSEM is so intuitive it's almost like a black box, so we'll probably have to unpack some of those things as well. The last thing that I wanted to draw your attention to is a software tool called Onyx. It's actually written Ωnyx, but for English speakers it can be called Onyx. It's hosted on a German website, but it is a nice Java program where you can draw structural equation models, generate lavaan code, OpenMx code, and all sorts of code, and then run it if you have the software available. So I'm going to show you what it looks like: here is the download page for Onyx, and here is the homepage for Onyx; it's a very sparse, simple homepage, and I'll show you how we can use Onyx to create structural equation models. So those were the three preliminary things that I wanted to talk to you about before we went ahead. And off we go.
So the first question that I have for you is coming up now in your Google Docs document, and it asks: as a researcher, or as a reader of research and meta-analyses, how many outcomes have you seen reported in research papers? Only one? Two at most? At least two? If you can put a cross mark next to one of the options, that would be really useful, and then we can launch into some discussion. Quite an overwhelming number of people have already answered that at least two outcomes is what you get to see in the research papers you read and the research you do. So it's almost never in our real lives that we have only one outcome for one intervention. Although when we start writing PICO questions for conducting our search, before we embark on a meta-analysis, you will see that quite often it's one intervention and one outcome; that framework is very commonly used in a number of different disciplines. But you know that, in practice, the way it works is that we get many outcomes for an individual exposure or an individual intervention. That is the norm. It is also therefore possible that these outcomes are correlated with each other, so one outcome kind of feeds on the other. I'll give you an example. At the moment we are conducting a study where we are looking at how organizational practices, that is, practices or policies used in offices or workplaces, such as morning yoga or meditation sessions, impact psychosocial outcomes of the workers in those workplaces. In other words, how do workplace-based interventions impact psychosocial outcomes of the employees in those workplaces? We are conducting a meta-analysis. There have been numerous studies; we identified something like 100-plus studies, and we are now binning them into separate groups.
And we are seeing, invariably, that the psychosocial outcomes being reported are things like anxiety, depression, people not reporting for work, high rates of attrition from the workplaces, and so on. Now, you can understand that some of these outcomes are closely related to each other; anxiety and depression, say, are related. If we conduct separate univariate meta-analyses, then we get individual snapshots of the outcomes for individual interventions. That is informative. But it is far more informative if we are able to conduct a meta-analysis that shows how these interventions work when two variables are taken together, and that is the challenge of multivariate meta-analysis. Standard meta-analysis practice, as we do it normally, uses packages like meta (not so much metafor; we'll talk about that later). If you use, say, RevMan for conducting meta-analysis, it is very useful and very well suited to one outcome and one intervention; it will give you all those things. Of course, there is the matter of network meta-analysis, but we are not going there yet. When you do that, you are doing a single-variable meta-analysis, and that has been standard practice for many, many years. So what does a meta-analysis look like when you take two correlated variables together? That's a challenge we need to understand, and we need to see whether we can do something about it. Another challenge that we quite often face is that in many sciences and many fields, people report not individual estimates but correlation matrices. That is very common practice in, for example, twin studies, or gene-environment interaction studies, where you start with correlation matrices.
So there is a case for understanding what those correlation matrices tell us. Rather than only looking at something like a coefficient alpha or a point estimate, how could we synthesize entire correlation matrices and then reanalyze them together? In other words, take the individual component studies, generate a synthesized correlation matrix from all of them, and then analyze it ourselves. That, too, is research and evidence synthesis and meta-analysis. Now, using the standard meta-analysis tools that we commonly use, it is quite difficult to do that; in fact it's impossible, because they're not set up to do that kind of work. Those are some of the areas where structural equation modeling shines. But before we get to the nitty-gritty of structural equation modeling and I show you some of these things, I would like to take you back a step and talk a little bit about what it means to do meta-analysis. At its most basic level, if you think of a study, what we are trying to do is estimate a true population value of the phenomenon that we want to study. For example, in our case, if we were going to study the association between organizational practices or workplace interventions and psychosocial outcomes, then perhaps we are studying it using a standardized mean difference, a mean difference, or Hedges' g. What is it in the population? If the intervention were offered to a larger section of office populations everywhere in the world, what would that particular intervention's outcome look like? Let's call that mu. But what we have in hand is the observed effect size from an individual study, which we then synthesize. That's why we have given it a hat and called it theta hat.
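The quantities just introduced, together with the within-study and between-study terms described next, fit together in the standard random-effects model. Written in the usual textbook notation (nothing here beyond what the talk describes verbally), the observed effect in study k is:

```latex
\hat{\theta}_k = \mu + u_k + \epsilon_k,
\qquad u_k \sim N(0,\, \tau^2),
\qquad \epsilon_k \sim N(0,\, v_k)
```

so the total variance of an observed effect is \(\operatorname{Var}(\hat{\theta}_k) = \tau^2 + v_k\), the between-study variance plus the within-study sampling variance; fixing \(\tau^2 = 0\) recovers the fixed-effect model.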
The subscript k indexes the k-th study in which you get to see this observed effect, which is trying to assess the true population estimate. But then there are a couple of other things that we need to think about when we start thinking in terms of meta-analysis. One of them is this: an individual study always consists of the individuals included in it. They can be human beings, widgets, or other things, and they vary within the study itself. That's why we have an epsilon for a particular study k, and the variance of that study is the within-study variance. But then we go a step up. There are many studies put together, and these studies themselves vary between them; that is what we are putting here in the form of tau. So for the k-th study, you can still see this structure. And you could say, "What you're showing us is essentially the estimation in a random-effects meta-analysis," and I'd say yes, that's true; we'll come back to what happens in fixed- versus random-effects meta-analysis later, and I'll ask you that question later. For the time being, let's look at it like this: the within-study variance gives you the variance of the studies locally, within the studies themselves, and tau squared is what we get as the between-studies variance estimate. If we put them together, then we get a complete picture of what the total variance looks like in the entire corpus of evidence that you want to synthesize. Here is a metric that we commonly use, based on the proportion of the total study variance, which consists of tau squared, the between-study variance, plus the within-study variance v.
The proportion of that total constituted by the between-study variance gives us a measure of how scattered our studies are, an estimation of heterogeneity, commonly expressed in the form of I squared. Now, in the health sciences and medicine, to some extent, we have a rule of thumb. We say that if I squared is something like 25% or lower, heterogeneity is low, or "homogeneous", quote unquote; around 50% is moderate; 50 to 75% is moderate to substantial; and beyond 75%, the studies are very heterogeneous. If they're heterogeneous, we tend to do one kind of modeling; if they're homogeneous, we do another kind of modeling in meta-analysis. So here's my question to the group: what is your take on fixed- and random-effects models? Let's have some conversation on this. You can put your answer in the Google Docs if you want to write something, or if you want to open your mic and say something, you're more than welcome. I'll stop for about five minutes and let's discuss this and see how we go. What would you say about tests of heterogeneity and about the relationship between fixed and random effects? What are your preferences? What do you do? I'll tell you some stories of things I had experienced before, but I want to hear your impressions first. Anyone, any comment on your take on fixed-effects versus random-effects meta-analysis?

"If I can: in my field, conservation ecology (more conservation than ecology), our heterogeneity is so huge that the rule is never really to use fixed effects, because it doesn't make sense with that amount of heterogeneity, I think."

Yeah, so again, it's a matter of field, of course. It's a very good point, Matt.
And yeah, before I tell my story, I'd really like to get a few more impressions and comments. One comment that has come up in the Google Docs is that random effects take care of the variability across data sets, not by using all the data, but by considering that the studies are like a random sample of all the studies in the universe that are out there. Okay. So we will let this conversation run for about another five minutes to see how we are doing. Any other comment, anything that you have experienced in your field, like Matt was mentioning in conservation ecology? And I'll tell the story of what I had experienced one time, in another setting, doing meta-analysis of randomized controlled trials.

"I'm just beginning, so I'm completely new to this, but: using random-effects models to account for repeated sampling designs; I guess I don't see how you would do that with a fixed-effects model."

Yeah, good point. Any other comments? One comment that has come in on the Google Docs is that they differ in the true-effect assumption. Yep. Another: random-effects models are used as if they can magically solve the heterogeneity. Very interesting comment; we'll take a look and see how these things are related. Many years ago, we were doing some work for the government, and what they preferred at that time, and still do I suppose, is this: they would commission what are called systematic reviews, which means that we would look up, using a standard set of questions, a number of studies, summarize the results of those studies, and, if that was warranted, conduct what is called a meta-analysis. One of these policy makers once told us that if you find there is heterogeneity, then do not synthesize anything: "We are only going to use studies that are homogeneous."
"So give us only fixed-effects meta-analysis results," and things like that. And we were wondering, whoa, hold on a second, what is going on here? These things have become such a bone of contention that quite often you will find that many people will cull studies: you get a large number of studies, then you restrict your selection criteria to such an extent that you end up with a very, very small sample of studies, and you derive your point estimates based only on those. You may find such an approach quite limiting, because in effect what happens is this: fixed-effects meta-analysis is a special case of random-effects meta-analysis. So let's go back to that equation a little bit and see what is going on here. This graphic comes from Mathias Harrer's excellent package dmetar and his accompanying book, which I encourage you to take a look at; the book, Doing Meta-Analysis with R: A Hands-On Guide, is very nice. This figure is taken from the chapter on multilevel meta-analysis in that book. So let me explain what I mean here. When you start thinking in terms of meta-analysis, you start thinking in terms of where these data came from. Citing examples from the health sciences and medicine, I'd say these data came from human beings; or the data could come from a set of environmental objects; the data could come from widgets. We're not going into the nitty-gritty of who our units of analysis were in the first place; let's say these triangles represent human beings. These units, the participants, are our level one, from which our within-study information comes. You see that the distribution drawn here pertains to this particular study, and a variation is arising from there.
What happens is that researchers, when they report their studies, synthesize that information at the participant level and then give you a point estimate and the variance around that point estimate. What we do in meta-analysis, or evidence synthesis, is take those point estimates and the variances that arose from those individuals in those studies, and synthesize them at level two. When we do that, we are going up a notch, a level up; that's where we get our analysis from. If these studies were the only ones and nothing else, then we might as well have stopped at that level, synthesized one report, and been done with it. But there is so much variation that we normally expect, and therefore a random-effects meta-analysis is probably the more natural and normal way to do everything. We then decide whether we are going to set tau to zero, so that we ignore the random-effects part. If the fixed-effects and random-effects results closely match, and we have reasons to believe that there is not much heterogeneity (the studies are homogeneous, they pass the test of homogeneity), then we report fixed effects, and we show that the fixed-effects and random-effects meta-analyses are the same. Essentially, what we are doing is conducting a two-step model in generating our point estimates in the form of a meta-analysis. But wait, there's more. Anyone who has ever read or conducted a meta-analysis knows that we do not just leave it at reporting a point estimate. Instead, we start binning the studies. We cluster the studies in various ways: perhaps into studies that are good quality and studies that are poor quality.
Perhaps we cluster the studies by country, or by some other parameters that we think are quite important. There is another thing that's also very important in the case of meta-analysis, and that is meta-regression. This is a particularly important piece of work that economists often do when they do meta-analysis, because they often take hundreds of studies together. For them, meta-analysis means taking variables that are contained within the studies, the point estimates and partial correlation coefficients, and running regression models on those point estimates; that's their meta-regression. That's also very important for us. Meta-regression in the context of medicine and public health is often also discussed alongside sensitivity analysis, where various parameters are tested to see whether they have a bearing on the estimate. So what we are trying to do here is bin the studies, categorize the studies, cluster the studies, and then examine them, and that is important, because this information came from somewhere. In effect, your random-effects meta-analysis tau squared is a composite of two things. The whole of it is your random-effects meta-analysis, but you can divide it into two parts, one of which captures the effects of the various other variables in my meta-analysis that have a bearing on the results. So meta-regression becomes incorporated into the meta-analysis by default. This introduces a third layer. So now you know that meta-analysis essentially is a three-level, multilevel problem, of which the random-effects meta-analysis is a special case, which is like a two-level meta-analysis.
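The workshop's hands-on examples use R (metaSEM, metafor), but the "special case" claim is easy to check with plain arithmetic. Here is a minimal, illustrative sketch in Python (the function names are mine, not from any package) that pools effect sizes with inverse-variance weights, estimates tau squared with the classic DerSimonian-Laird formula, and shows that fixing tau squared at zero reproduces the fixed-effect answer:

```python
import math

def pool(effects, variances, tau2=0.0):
    """Inverse-variance pooled estimate and its standard error.
    tau2 = 0 gives the fixed-effect model; tau2 > 0 the random-effects model."""
    w = [1.0 / (v + tau2) for v in variances]
    est = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    return est, math.sqrt(1.0 / sum(w))

def dl_heterogeneity(effects, variances):
    """DerSimonian-Laird estimate of the between-study variance tau^2,
    plus Cochran's Q and the I^2 heterogeneity proportion."""
    w = [1.0 / v for v in variances]
    fe, _ = pool(effects, variances)  # fixed-effect pool (tau2 = 0)
    q = sum(wi * (y - fe) ** 2 for wi, y in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0
    return tau2, q, i2

# Three toy studies with unequal precision and visible heterogeneity.
y = [0.1, 0.5, 0.9]
v = [0.01, 0.04, 0.04]
tau2, q, i2 = dl_heterogeneity(y, v)  # tau2 = 0.16, Q = 14, I^2 ~ 0.857
fe, _ = pool(y, v)                    # 0.30: the precise first study dominates
re, _ = pool(y, v, tau2=tau2)         # ~0.478: weights become more equal
```

The random-effects pool adds the same tau squared to every study's variance, so precise studies lose some of their dominance; with tau squared set to zero, `pool` returns exactly the fixed-effect result, which is the sense in which fixed effects is a special case.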
And one level down, the single-level meta-analysis, the fixed-effects meta-analysis, is a special case of the random-effects meta-analysis. I'll stop here, because I've bombarded you with a lot of things, and I'll take some questions, thoughts, and observations from you. So what do you think: does this make sense? Does it make no sense? Am I fuzzy? Am I wrong? The floor is open, let's say for the next five minutes; let's have a discussion on this. Perhaps I can break the ice and ask: who among you does three-level meta-analysis all the time?

"I think I've done some of these before, where we've had study-level heterogeneity and then within-study-level heterogeneity, if I'm understanding it correctly. I think this hierarchy idea strikes a chord, because when you're extracting data you'll find that some authors provide really detailed data, maybe even enough for individual-level data, and some don't, so this way of thinking about it makes a lot of sense."

Thank you, Matt. Any other thoughts? The idea here is this: anytime you are conducting a meta-analysis, whether you acknowledge it or not, and anytime you're using what is called a meta-regression within a meta-analysis to identify the effects of various other variables that were noted in the studies, abstracting information from a study and incorporating other information about the study that may have a bearing on the outcome, you are conducting a three-level meta-analysis. You're definitely conducting a two-level meta-analysis.
The only thing is that you're setting your parameters such that you are confining yourself to a fixed-effects meta-analysis and doing a level-one analysis. But basically, you're doing a three-level meta-analysis all the time, and we will see in a bit that this is the nature of the game when you're doing a univariate meta-analysis, most of the time. Anders?

"Yeah, I think you can hear me now. So, I've never done a meta-analysis before, so this is coming from a naive first-timer. How is this any different from a field ecological study where you have samples in different forests or countries? You would never not do hierarchical mixed-effects models, because the tools are there and they're easy to use. You would never just do a fixed model by default. And if you found that there was no between-group variation in the random effects, then you could drop them; but why would you, what do you have to gain? So why is it different in meta-analysis?"

It's not different. It's just a matter of how we tend to talk about what we do; we've been speaking in prose all along, that kind of thing, you see. People will start realizing, okay, yes, we are doing this all the time. We just don't say it like that; we are not saying that we are doing hierarchical modeling, but we are actually engaging in that kind of exercise. So let me recap once more what we're doing here. When we are doing evidence synthesis, we are dealing with papers, right? These papers contain results from individuals, and these individuals could be individual human beings or other entities, whatever the units are. The results contained in the papers, reported as point estimates and variances, are the ones that came from some people somewhere.
These people were different, right? That's the reason why we have a point estimate and a variance around it reported in the paper. And that's level one, if you wanted to just do that and do nothing else. Fair enough; what we would have to assume at that point is that these papers were in themselves enough to give us everything we wanted, and we would summarize them and be done with it. But in real life that's not the case. As Matt was mentioning earlier, in ecology for instance, and definitely in the case of the health sciences and many others, we can never assume that that's it, because there is so much variety. So we take a step up and say: look, we now need to account for the possibility that these few papers belong to a universe of similar papers, which means that they are a random sample of all the papers that we could get. That's where our level two comes into the picture. But then we go a step up again, because even though those papers were there with us, they were clustered. Not everything comes from one kind of unit, exactly as you were saying, Anders: maybe some studies are focusing on wetlands, and some are coming from some other geographical region or some other situation. Now I'm going to synthesize, but I can't synthesize everything like apples and oranges. That's where level three comes into the picture. And in our analysis, that level three gets noted in the form of a meta-regression: we bring in all of the other variables and see how our effect estimates would vary if we wanted to take those clusters into account, and there are ways in which we can do this.
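In the same spirit, and again only an illustrative sketch in Python rather than the R tooling used in the workshop, the three-level picture just described splits the total variance into a sampling part (level 1), a within-cluster part (level 2), and a between-cluster part (level 3). A small helper (the function name is mine) makes the bookkeeping explicit:

```python
def variance_shares(typical_v, tau2_level2, tau2_level3):
    """Share of total variance at each level of a three-level model:
    level 1 = sampling error within studies,
    level 2 = heterogeneity between studies within a cluster,
    level 3 = heterogeneity between clusters (countries, settings, etc.)."""
    total = typical_v + tau2_level2 + tau2_level3
    return {
        "level1_sampling": typical_v / total,
        "level2_within_cluster": tau2_level2 / total,
        "level3_between_cluster": tau2_level3 / total,
    }

# If the level-3 component is zero, this collapses to the ordinary
# two-level random-effects model; if both tau^2 terms are zero, only
# sampling error remains, which is the fixed-effect model.
shares = variance_shares(typical_v=0.05, tau2_level2=0.02, tau2_level3=0.03)
# shares: 50% sampling, 20% within-cluster, 30% between-cluster
```

These per-level shares are the three-level analogue of I squared: they say how much of the total scatter is attributable to each level, which is exactly the "special case" hierarchy described above.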
If this is a multilevel exercise at all times, then what are some of the tools and approaches we have in our toolkit? A very useful one is a package called metafor. But what I am going to show you in this workshop — and we can workshop on this — is the use of structural equation modeling, so let's deal with structural equation modeling a little bit. Here is what this world looks like in a structural equation modeling world; let me explain this. Let me minimize this window first — what questions do you have? "A part of your screen is black, not sure why." Yeah, that's just another window that I moved, that's it. Okay, can you all see my screen well now? Okay. So this is what it looks like in the world of structural equation models. Before I go ahead and make a fool of myself, I just wanted to check how many of you are already familiar with structural equation modeling — just a show of thumbs or hands, whatever works. Matt Jones, Zaynep, Mads again. Okay, and there are a few of you who have not used SEM before: there's a thumbs down from Arthur, and one from Bettina as well — whether that means you're not familiar with it or you just don't like it! I take it some of you are familiar and some of you are not. For those of you who are familiar, of course, it needs no introduction; but for those who are not familiar with the symbols and language — ah, the other windows have come up again. Okay, I'm going to fix that. It was because of this. How about that — can you see me now, do you have a clear window? Yes? Great. All right.
So, for those of you familiar with structural equation modeling, this will need no introduction, but for those who are not: what you are seeing here is the representation of a two-level meta-analysis as it would look in structural equation modeling. From left to right and top to bottom, you see there is a triangle, and that triangle is called a constant. It always receives a value of one. An arrow extends from the triangle to a circle, and that arrow has different meanings in different contexts. In this particular context, the arrow means that beta_R is the mean of the random effects. Don't read anything into the 1.0 — it's a pre-assigned starting value — whereas beta_R will be estimated: it is the mean of the effect size that you get to see there. So the constant is lending that mean structure, that mean value, to F1, which is a circle. And F1 is what is known as an unobserved or latent variable. This variable is not something we get to see. We do get to see the square box, which is the point estimate of an individual study, but we do not get to see where those observed variables arise from. Basically, that circle — the unobserved, latent, hidden variable — explains the variation in the manifest variable. The curved two-headed arrow is either a variance or a covariance: if it occurs between two variables, it's a covariance; if it occurs in the context of a single variable, it's a variance. Here it is my tau-square, the variance of the random-effects variable present here, which represents the between-studies variation.
Okay, so we call that a random-effects latent variable; it will assume a value based on the mean structure you see here, and its variance is the tau-square that we will estimate in the model. At the within level — that is, at the individual-study level — we know there is a point estimate and there is a variance. So v_i is the unexplained variance we get to see for X1, and all other variability is explained by the random-effects structure. Okay, this will become clearer as we dive into structural equation modeling a little more, but for the time being I will ask you to hold in your mind that there are two levels to this analysis — I just wanted to simplify things, so I did not include a third level here. A two-level meta-analysis looks like this: there is a within level and there is a between level, and with the between and the within level together we get the full picture of a random-effects meta-analysis. Why is this important? Because if we set the tau-square to zero, the arrow moves directly from the constant to the observed variable, and we don't need to worry about the latent variable anymore — that is going to be our fixed-effects model. So we can manipulate this: we can set this variance to zero, and doing so gives us a fixed-effects meta-analysis. We can easily switch between random-effects and fixed-effects meta-analysis within this conceptual understanding of conducting a structural equation model in order to conduct a meta-analysis. So, before I go ahead and show you code, I just wanted to dive a little into some structural equation modeling chops.
That's why I am going to do some worked examples. We will be using metaSEM, but before I dive into that — because a few of you have not used SEM before — let me show you some structural equation modeling basics. I will be moving between an RStudio instance, some visualizations, and this screen, so I beg your pardon for the switching; if you have your own RStudio instance and want to play along, I'm more than happy for you to do that. Matt's question was: "So is the one constant like the tilde-one bit in your random-effects model?" Exactly — that is exactly the point here. The one constant is like that `~ 1`. Well, it's not quite the `1 |` with the vertical bar — that's the clustering variable. What we are doing here is not really clustering, but giving an intercept, a mean structure. So think of that triangle, that one constant, as an intercept in a regression model rather than as the vertical bar; we are not indicating clustering or levels here — that's a different concept. "I see. Can I have a look at the previous slide again? So your response variables below the dotted line, those are your effect sizes?" Yes. "Okay." So think of this as a regression model: X1 is regressed on F1. F1 receives a mean from the constant, and F1 explains X1 — X1 has its own variance, part of which is explained by F1, the unobserved variable. In setting up that regression equation, we would have an intercept term here, but the mean of that is given by beta_R. Okay.
If we had an arrow that went straight from the constant to the observed variable, that would be the intercept term, but that's not relevant here. What's relevant here is the random-effects mean. "So there's no fixed effect here yet?" No — the fixed effect only comes in if we set the tau-square to zero, because then the random effect doesn't make any sense; there is only one underlying effect, and that is the fixed effect you get to see. "Okay, thanks." Once we get into the exercise, you'll see what I mean, or we can come back to it again. All right, let me bring up RStudio Cloud and I'll show you a few other things that are quite interesting. I will resume my sleeping RStudio session and see how it goes. Quick question to the group: can you see my RStudio once it boots up? If you can, kindly give me a thumbs up. Okay, it's starting to load. Right, can you see an RStudio-type setup? "Yes, we can see code, and code, and more code." Yeah, that's an R chunk and all that kind of stuff. Cool. All right. Now, again for those of you who are new to structural equation modeling, you will see several of these symbols. The square symbol means a manifest or observed variable. The circle means a latent or unobserved variable — a variable that you have not seen and don't know directly, but that you are going to measure through the measurement variables. A straight arrow means a directional relationship: if an arrow goes from a circle to a square, it tells you that the latent variable is explaining the manifest variable.
If an arrow goes from the constant to a square or a circle, then it is either a mean or an intercept, depending on what kind of variable we are working with. If it points to an exogenous variable — a variable that is going to explain something else — then it is a mean. If it points to an endogenous variable — a variable that is explained by something else — then it is treated as an intercept, as in a regression equation. So the meaning of that directional relationship depends on the context in which it appears. And then there is the non-directional relationship, the curved two-headed arrow, which is either a covariance, a correlation, or a variance: when a variable varies with itself, that's a variance; when one variable varies with another, it's a covariance. That was too fast — what questions do you have? Let's see if there's a question in the chat box. "Could you perhaps say something about the utility of piecewise SEM in all of this — the advantages, maybe, of mixing and matching error structures and familiar coding?" Yes. Piecewise SEM is essentially what you will be doing when you do this kind of work, although it's not always framed that way. Okay. So let's start with a very simple instantiation of what we call a measurement model. Onyx is a program where you can create these diagrams, and it's really very simple to do this work. I will mostly leave this as an exercise for you, because it's a little time-consuming to draw all the diagrams and arrows, but perhaps I can show you a bit of it. So I'm going to stop sharing the screen here for a minute, bring up the Onyx screen, and then share the screen again.
So now you should be able to see an Onyx screen — thank you, Matt. This is an Onyx formulation of a structural equation model for what is called a measurement model, or confirmatory factor analysis model. What happens here is that this circle called G is a latent variable: it remains unobserved and will be measured using the manifest variables info, simil, word, matrix, and pict. Just to give you some background, this is from the Wisconsin questionnaire, where people were asked a number of questions aimed at understanding a common intelligence measure, called g, and the responses were entered into a correlation matrix. What happens with these arrows is that G explains the variation in the manifest variable info. The 1.0 on the path labelled a is just a starting value; a is the path coefficient that goes from G to info, and if you square that path coefficient you get the explained variance — the proportion of info's total variation that is explained by the common intelligence factor G. That leaves an unexplained variance, the error variance, which is also unobserved, and therefore we also assign it latent-variable status. We fix its path coefficient — the one going from the error variance to info — to some value, and that tells us the extent to which the error term accounts for the remaining variation in the info variable. What you can do in Onyx is create variables: I can right-click, create a variable — maybe I want an observed variable — and it will create an observed variable.
Then I can select the observed variable, or select the latent variable, because I want a path to go from the latent variable into my observed variable: right-click, add a path, attach it to the observed variable, and the path gets added. Then I can tweak the path's characteristics to suit my model. Likewise, I can add another latent variable that will serve as the error variable, and draw an arrow from that latent variable to my manifest variable. What happens is that if you have data available, you can generate a script — of course it's not always well behaved — that can be used in OpenMx, as either a path model or a matrix model, or in many cases you can generate a lavaan script which you can input directly into the lavaan package, and it is going to run. So you see, once you know the theory of structural equation modeling, it has become pretty easy, almost trivial, to execute in real life using tools like Onyx and lavaan. Let's set that aside and come back to the screen we started with. So, using Onyx, we had drawn a diagram like this. Let's see — there's a chat comment. Oh, that's nice. Good, thank you, Matt. We have created this confirmatory factor analysis in Onyx, and now what we are going to do is write the script by hand in lavaan (we could also have generated it in Onyx). We will specify a model, fit the model, and examine the model, in that order: we start with the data, then specify the model, then fit the model, and then examine the model.
We can then modify specific parameters, re-specify the model, and work with it again; the cycle continues until we find a good model — the best model we can work with. And remember, as George Box used to say, all models are wrong; some models are useful. So here is the code. Let me go over it slowly, in case any of you want to type it into RStudio; I will show you how it looks in RStudio in a moment, and then we can run it. In lavaan, I first create a matrix — a lower triangular matrix, as you can see, because it's a correlation matrix. The 1s on the diagonal are the variances, and the lower triangle of 0.72, 0.64, 0.63 and so on holds the correlations between the variables. The column names and row names are the same: info for information, simil for similarities, word for word reasoning, matrix for matrix reasoning, pict for picture completion. These were the items used to infer general intelligence. Because this is a correlation matrix, and structural equation modeling prefers to work with a covariance matrix, we also have to provide standard deviations: here is the list of standard deviations of the variables, with the same names as before. Then I can convert the correlation matrix into a covariance matrix using the cor2cov function. Next, I specify the sample size — in this case, 500. And here is where I specify the model: G, as you can see, is here, and a, b, c, d, e are the path coefficients. So a is the path coefficient for the variable info, the observed variable, b is the path coefficient for the observed variable simil, and so on. With that, I have created the model.
It's a simple, single-line model to work with. Then I fit the model with cfa() — CFA stands for confirmatory factor analysis. It's a function that takes the model and the covariance matrix, because that is the data it wants, and I have to provide at least the number of observations. Once I do that, I can ask for a summary of the fitted model, with standardized estimates of everything. When I do, I get the standardized estimates: for example, in the case of info, the standardized path coefficient is 0.857 and the error variance is 0.265. It turns out that if you square the path coefficient and add the error variance, they add up to 1, because together they make up the total: this is the explained variance — the part of info's variance explained by G — and this is the part that remains unexplained as its error variance. The explained part is the communality, and the rest is the unique variance. Sorry, I was too fast again, but this is, in general, how a confirmatory factor analysis runs. We now come to another thing called a mean structure, and that is where I am going to explain things a little further, and then show you how it looks when it comes to meta-analytic structural equation modeling. In a mean structure, we have something different. Remember that this triangle is a constant, and the constant always gets the value of one. This is my latent variable, and these are the variables on which I made measurements. What I am trying to do is use the latent variable to explain the variation in each of these five manifest variables.
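The whole workflow just described can be condensed into a short script; a sketch, assuming the lavaan package, with made-up correlations and standard deviations standing in for the real slide values:

```r
## Confirmatory factor analysis sketch in lavaan.
## Most correlations and all SDs below are placeholders, not the slide's numbers.
library(lavaan)

lower <- '
 1
 0.72 1
 0.64 0.63 1
 0.51 0.48 0.55 1
 0.47 0.42 0.50 0.46 1 '
R <- getCov(lower, names = c("info", "simil", "word", "matrix", "pict"))

sds <- c(1.1, 1.0, 1.2, 0.9, 1.0)   # assumed standard deviations
S <- cor2cov(R, sds)                # correlation matrix -> covariance matrix

# G is the latent general-intelligence factor; a..e label the path coefficients.
model <- 'G =~ a*info + b*simil + c*word + d*matrix + e*pict'

fit <- cfa(model, sample.cov = S, sample.nobs = 500)
summary(fit, standardized = TRUE)   # squared std. loading + error variance = 1
```

The `summary()` output is where the 0.857-loading / 0.265-error-variance style decomposition appears for each indicator.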
Okay, so a is a path coefficient, and a-squared is the part of the variation in X1 that will be explained by LV. The way that works is that X1 is regressed on LV. In the language of path analysis, we call LV an exogenous variable, because LV does not need anything else to explain itself — it is the one we start from. X1 through X5 are called endogenous variables, because within the model they are explained by LV. Now, when we work out the mean structures: alpha becomes the mean of LV, because the one is a constant, and when the constant sends an arrow to an exogenous variable like LV, it gives that variable its mean. Whereas mu1, mu2, mu3, mu4, mu5 — because X1 through X5 are endogenous variables, variables being explained by LV — are intercept terms. Sorry, I mixed that up a moment ago, so to restate it: when the constant's arrow goes to an endogenous variable, it becomes an intercept; when it goes to an exogenous variable — one that is trying to explain something — it becomes a mean. So alpha is the mean we use for LV, and the mus are the unobserved but estimated intercepts of the various manifest variables. Okay. Now, if we put this understanding into the model of a meta-analysis, this is what we get. In our setting, X1 is the observed variable we get for each study, and F1 — the random effect — is the latent variable trying to explain X1. So this is the intercept term, but it is also our random-effects mean.
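In lavaan syntax, that mean structure is written with `~ 1` lines; a specification fragment only — the names LV and x1..x5 come from the slide, not from any dataset:

```r
## lavaan mean-structure specification, sketched (not fit to data here).
model <- '
  LV =~ a*x1 + b*x2 + c*x3 + d*x4 + e*x5   # measurement part
  LV ~ 1    # alpha: mean of the exogenous latent variable
  x1 ~ 1    # mu1: intercept of an endogenous manifest variable
  x2 ~ 1; x3 ~ 1; x4 ~ 1; x5 ~ 1           # mu2..mu5
'
```

Passing this model to `cfa(..., meanstructure = TRUE)` with raw data would estimate the alphas and mus; identification typically requires fixing some of them.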
So that's like beta_R, the random-effects value we ascribe to it. We can express this in the language of meta-analysis using structural equation modeling with a library called metaSEM — metaSEM is the package that makes doing these things a little easier for us. There is a dataset built into metaSEM called Mak09. It's a dataset on, I think, the risk of atrial fibrillation — a heart condition — among people taking two kinds of medicines: bisphosphonates versus non-bisphosphonates. So people were either taking the drug or not taking the drug, and the authors conducted a meta-analysis on that. For each study, I select the point estimate and the variance around the point estimate, and then I run meta(). One important practical note: if you are using both the meta package and metaSEM within R, there is a name conflict — you will need to unload meta from your workspace and keep only metaSEM, otherwise this is not going to work. That's a quirk of metaSEM you need to keep in mind; if you use something like metafor, there's no problem at all. And then you get the summary. Let me run this in RStudio and show you. There's a question in the chat: "So one times alpha is LV?" Yes — let me show it again and see where that makes sense to you. All right. Here is what I have done: I have loaded the metaSEM library, loaded the Mak09 data, looked at the head of it and the types of treatments, selected yi and vi, and then run meta().
Okay, so let me run everything one by one. Once I load the metaSEM library and run these lines, it loads the data, and if I ask for the head of it, this is what it looks like: for the first six studies, each is either an RCT or an observational study. (We are not really supposed to mix these types of studies together, but let's say we have.) What you get to see is yi, which is the point estimate, and vi, which is the variance of the point estimate; I think what they were examining here is the relationship between the treatment and atrial fibrillation. So, to run the analysis, I create an object for the random-effects model and fit it using metaSEM's meta() — and when I do, you see the results of a random-effects model come up. The intercept is the one we were talking about earlier: the intercept is the random-effects mean, the beta_R we discussed, and in this case it has turned out to be 0.180. And the tau-square, which is the random-effects variance, is actually very, very small. All right, a question to the class: without looking at anything else, what do you think is happening here — what kind of meta-analysis would fit, and what would you do? Here we fit a random-effects meta-analysis, but we might as well have fit a fixed-effects model — it would make no difference, right? Because look at it: the I-squared is zero. This is a toy example; we only have eight studies here. Okay. So that is what you get to see in this case.
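Put together, the random-effects fit is just a few lines; a sketch assuming the metaSEM package (Mak09 ships with it, with columns `yi` and `vi`):

```r
## Univariate random-effects meta-analysis with metaSEM.
library(metaSEM)
data(Mak09)
head(Mak09)                # yi = point estimates, vi = sampling variances

re_fit <- meta(y = yi, v = vi, data = Mak09)
summary(re_fit)            # Intercept1 is beta_R; Tau2_1_1 is tau^2
```

The summary also reports I-squared, which is what tells you here that a fixed-effects model would do just as well.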
Now, if I wanted to fit a fixed-effects model, how would that look? I would set RE.constraints — the random-effects constraints — to zero, which forces the tau-square to be zero. That turns the random-effects model into a fixed-effects model. If we do that, see what happens: it still gives you an intercept, and it is the same quantity, because now the arrow moves from the constant directly to the observed variable — the latent variable effectively no longer exists. There is, if you like, an imaginary variable here with a tau-square of zero; we call this a phantom variable, a latent variable with zero variance. So the arrow comes from the constant to the observed variable, and that is the intercept term — which is now beta_F, the fixed-effects estimate, and it is no different: it comes out as 0.180. Remember, the value we saw before was 0.180, and it's the same here, 0.180, and the I-squared is zero. So a fixed-effects model is probably a better fit — or rather, you could do with a fixed-effects model; you don't really need a random-effects model, although you can report one, since there is no difference between the two. That is an example of a univariate meta-analysis where you can switch between fixed-effects and random-effects models using a structural equation model. It's the conceptual understanding that is important, and you might wonder: hey, what's the big deal? Why spend so much time on something you can do anyway with other tools? The answer is: what happens when you have two or more outcome variables for the same intervention, and those outcomes are correlated?
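The switch to fixed effects is a single extra argument; a sketch, again assuming metaSEM and its Mak09 dataset (`RE.constraints = 0` pins the tau-square at zero):

```r
## Fixed-effects meta-analysis: constrain the random-effects variance to 0.
library(metaSEM)
data(Mak09)

fe_fit <- meta(y = yi, v = vi, data = Mak09,
               RE.constraints = 0)   # tau^2 fixed at zero
summary(fe_fit)  # Intercept1 is now the fixed-effects estimate, beta_F
```

With this toy data the intercept barely moves relative to the random-effects fit, which is exactly the point being made above.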
If they are uncorrelated, you can use univariate meta-analysis — it doesn't matter, no one bothers. But correlated outcomes are where your traditional approaches don't work as well. So here is what we are going to do. The same model works — it is the same setup, except that instead of one F1 I have added another latent variable, and another manifest variable, so it now looks like this. What is more important is that these variables are correlated: there is a covariance term that I can add, and therefore there is a covariance between their parent unobserved latent variables. Exactly as before, the constants give us the intercepts, so we get two intercepts, two tau-squares, and a covariance term — and that covariance is very important for us. Let me give you the code for the multivariate meta-analysis. In this case, we use a dataset called BCG, which is about the odds of getting tuberculosis among people who were vaccinated versus people who were not vaccinated. So it gives us two y's: the natural log odds for people who were vaccinated and for people who were not vaccinated — those are the point estimates — along with the variance of each log odds and the covariance between the two. We need all three pieces, because we want to estimate the two tau-squares and the covariance — that is the order of importance. Then, once you have that, you ask for random effects; and similarly we may ask for equal intercepts, which means we can ask what happens if we force the two intercepts to be the same.
So let's take a look at this first. Again, we read in the data and look at what it contains. Here are the first six studies: they were conducted in different years and in different geographical regions, with random allocation or not. The key columns are the log odds for the vaccinated group, the log odds for the unvaccinated group, the variance of each, and the covariance between the two; on this basis they derived their point estimates. This is a toy dataset — with real-world data you could do many other things — but I just want it to make sense: there is one point estimate with its variance, another point estimate with its variance, and the relationship between the two. Some pairs may not be related; others might be. So let's run this and see what we get. You see that there are two intercepts — because there are two variables, there are two intercepts — and three tau components, because these are variances and a covariance: the first tau-square is the variance for intercept 1, the vaccinated group; the second tau-square is for intercept 2, the unvaccinated group; and the third component, the one of particular interest to us, is the covariance (or correlation) between the two, between the vaccinated and the unvaccinated. We could also ask for standardized estimates — we didn't here, but we could.
The other thing, of course, is that we can constrain the intercepts to be the same. If we did that, we would get a single intercept for both — we would force it. This is a toy example again, but if there were reason to believe the two effects were comparable, constraining the intercepts to be equal would make sense. And since these studies are quite heterogeneous, a random-effects model probably makes sense in this particular situation. Where this approach adds value is that it allows us to examine the individual summary estimates and the joint summary estimate. Take a look at this plot, and I'll explain it in a bit. If you plot it, you see that you have two effect sizes: one for the odds of getting tuberculosis among people who were vaccinated, and one for the odds among people who were unvaccinated. What you get to see in the middle is the joint estimate of the two. In each case there is a point estimate and a diamond, because these are random-effects models, and around the joint estimate you get a confidence region — a confidence ellipse for the joint effect sizes, the two outcomes taken together. Each small ellipse represents an individual study's pair of effects, and the large one gives you the broad interval for all the studies taken together. The advantage of this over univariate meta-analysis is clearly apparent, because now you know the correlation between these two effects — positive or negative, and in which direction they move.
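The bivariate analysis and the equal-intercepts variant can be sketched like this, assuming metaSEM's bundled BCG dataset and its documented column names (log odds in each arm, their variances, and the covariance between them):

```r
## Bivariate random-effects meta-analysis with metaSEM.
library(metaSEM)
data(BCG)

mv_fit <- meta(y = cbind(ln_Odd_V, ln_Odd_NV),               # two outcomes
               v = cbind(v_ln_Odd_V, cov_V_NV, v_ln_Odd_NV), # vars + covariance
               data = BCG)
summary(mv_fit)   # two intercepts; tau matrix: two variances and a covariance

## Constrain the two intercepts to a single common value:
eq_fit <- meta(y = cbind(ln_Odd_V, ln_Odd_NV),
               v = cbind(v_ln_Odd_V, cov_V_NV, v_ln_Odd_NV),
               data = BCG,
               intercept.constraints = matrix("0*Intercept",
                                              nrow = 1, ncol = 2))
summary(eq_fit)

plot(mv_fit)      # per-study ellipses plus the joint confidence ellipse
```

In `"0*Intercept"`, the 0 is a starting value and `Intercept` is a shared label, which is what forces the two intercepts to be estimated as one parameter.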
Yeah, you can see the two effect sizes, and the joint effect size of the two, in the centre diamond that is being reported. Okay, again, I have been going too fast. Let me stop and take questions from you. So what questions do you have; what questions can I answer? So we've got two outcomes here? Two outcomes, yeah, two outcomes that are correlated with each other. And those correlations are given in the form of the tau that you specify as a covariance in the input matrix. Okay, and in the code, where was the bit where we specify the actual model structure? I couldn't see that. Okay, so this is the one that I talked to you about, the black-box nature of meta(). What happens in meta() is that you cbind the two y's, that is, the two point estimates, and then you cbind the variances and the covariance. Yeah, this. And then you indicate that you want a random-effects model, and it will estimate the model for you internally. So that's the black-box bit of it. But if you want, I can share with you the detailed algorithm that it goes through to do that. Unfortunately, this is the bit that we can't do in lavaan; you can do it in Mplus, but not in lavaan. Because what they do here is what is called a random-slope model to handle the level-two part of the random-effects model, and you can't do a random-slope analysis in lavaan yet; that will probably come sometime. So before we go further ahead, let's have a quick set of questions and answers. I admit that it was a whirlwind, very quick introduction to a lot of very complex concepts; it probably takes several hours to unpack all of these things. So I would definitely like to apologize that I went too fast with these things.
And there are many, many things that I could not cover here; I will list what we can do going forward if you're interested. What other questions do you have; any questions in the chat box? Okay, one of the questions that came here is this: does a random-effects model here in reality mean mixed effects? No. A mixed-effects model is the one that we were talking about where you add in other variables in addition to the random effects, and you predict with these other variables the variation in the random effect sizes. That's a mixed-effects model. So a random-effects model is not the same thing as a mixed-effects model. I think that was a question that came up in the Google Doc; was there any question in the chat box? I couldn't see anything in the chat box. Okay, so. So again, it's been kind of a whirlwind tour of meta-analysis, and I have barely scratched the surface of structural equation modeling here. So basically the idea is that structural equation modeling can be used to simplify the meta-analysis concept to some extent. It unifies the concepts of random-effects meta-analysis, and it lays out the levels of meta-analysis for you. Using structural equation modeling will enable you to do meta-analysis in ways that conventional packages do not always let you do. Although, if you use metafor, you will be able to do the multilevel modeling in a pretty similar way, SEM gives you a more graphical way of looking at this world. There is another topic that we have not covered here; it's called meta-analytic structural equation modeling. What that does, of course, is take correlation matrices, put them together, and then reanalyze them. And we also have not shown you three-level meta-analysis, which asks what other variables you could bring in, and then builds your meta-analysis around them; but you can probably see how that goes at this point.
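For reference, three-level meta-analysis (effect sizes nested within studies or clusters) is available in metaSEM as meta3() (called meta3L in recent versions). A minimal sketch with made-up numbers, where study is the cluster indicator:

```r
# Three-level random-effects meta-analysis with metaSEM::meta3().
# Multiple effect sizes per study; all numbers are illustrative.
library(metaSEM)

dat3 <- data.frame(
  y     = c(0.2, 0.3, 0.1, 0.4, 0.5, 0.2),        # effect sizes
  v     = c(0.02, 0.03, 0.02, 0.04, 0.03, 0.02),  # sampling variances
  study = c(1, 1, 2, 2, 3, 3))                    # cluster indicator

fit3 <- meta3(y = y, v = v, cluster = study, data = dat3)
summary(fit3)  # Tau2_2: within-cluster; Tau2_3: between-cluster heterogeneity
```

The two tau components separate heterogeneity within clusters (level 2) from heterogeneity between clusters (level 3).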
All you do is add additional variables to this random latent variable and then fit the model to see how they stack up. So, there are quite a few things to explore, and what I've done is set up a GitHub repo for this workshop, which I will be populating with more code and more material. So if you're interested, please fork this repo and work with it, star it, do whatever you want with it. And if you want to add some stuff to it, you can open pull requests, and we can build quite a community around it to get some work done. I think I will leave it there and leave room for questions and your final observations and comments and everything else. There's something in the chat as well; thank you for putting the link to the repo there for us. Thank you so much. That was brilliant, a lot to think about. I can already see where I can use this in my work, where we have lots of things with quality outcomes; it will be quite useful. Are there any questions from the floor, from anyone? I'll start, I guess. I was just wondering, one of the things you sometimes do with structural equation modeling is combine measured variables into one latent variable. I guess you can do that here? Definitely, because you are doing it in a way already: what you're doing here, particularly with the multivariate analysis, is separating out two latent variables. Yeah, and you're regressing them on the manifest variables. Yeah. And so if you wanted to have more of these things, then they would have to be entered somewhere here, which means that hierarchical latent variables come in, and then you can use them. Yeah, yeah. I guess you can make it as complex... Sorry.
Like, could this be used for something where we have an outcome which is measured in lots of different ways? Yeah, exactly. So instead of the manifest variable, which is a very simplistic manifest variable here, you could have another latent variable with its own manifest variables that provide the measurements. Yeah, yeah, of course. There's no limit to that. I mean, the real critical part here is the random-effects variable that you set up; what happens below that is up to you. Yeah, right. Thank you for that. Excellent. Yeah. And does it deal well with, like, missing data? I remember that being a bit of an issue. Yes. That's the other thing that I did not talk to you about: missing completely at random versus missing at random. What happens if you do two separate univariate analyses is that you are assuming something like missing completely at random, and so you are losing parts of your data sets, whereas here, with the multivariate analysis, you can look at both outcomes together, because you can handle data that are missing at random. Yeah, hi. So this is a new question, so if you're not finished with the answer, you can finish that first. Yeah. I asked the question earlier about piecewiseSEM; I was referring to that specific R package on CRAN. I ended up using it years ago. One reason was that in ecology, where I work, nothing is ever linear, right? These SEM tools rely on the correlation or, what's it called, covariance matrix, and assume linearity, unless you add some sort of squared variables in there or do some other trickery. It's not straightforward, at least, to do nonlinear regression. So how do you approach that problem? Oh, okay. I think in that case there are several ways of addressing it. One of the things, probably the most intuitive, is to take the covariance matrices and then synthesize them together, if you wanted to go that way.
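On the question of an outcome measured in lots of different ways: in lavaan model syntax, that is a second-order (hierarchical) latent variable. A sketch with purely hypothetical variable names:

```r
# A second-order latent variable in lavaan model syntax: each
# first-order factor has its own manifest indicators, and the overall
# outcome is defined by the first-order factors.
# All variable names are hypothetical.
library(lavaan)

model <- '
  f1 =~ x1 + x2 + x3        # first measurement approach
  f2 =~ x4 + x5 + x6        # second measurement approach
  outcome =~ f1 + f2        # higher-order latent outcome
'
# With data in hand you would fit it as, e.g.:
# fit <- sem(model, data = mydata)   # mydata is hypothetical
head(lavaanify(model))  # inspect the implied parameter table
```

lavaanify() expands the syntax into a parameter table without needing data, which is a convenient way to check the specification.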
That style of analysis takes the variance-covariance matrices of each of these individual studies, if you can get them, and then synthesizes them together. So you get a synthesized variance-covariance matrix, which you then analyze. That's the one that I didn't cover here. But for nonlinear regressions that would be a preferred way to get these things done, rather than the ones we are doing here, which are more like traditional meta-analysis, two variables taken together and worked out. Yep, thank you very much. And also, I remember, working with that package was much more familiar: if you work with mixed models in R, it's just the same sort of coding, and you don't have to know about latent variables or random effects or, what's it called again, lavaan. Yeah, you don't have to learn the new terminology; it just mixes things together. Yeah. So basically, in summary then: meta-analysis is by nature multilevel; a two-level meta-analysis is a special instantiation of a three-level meta-analysis. And you can do structural equation modeling for multilevel modeling. So if you combine the two, you get this pathway of structural equation modeling doing meta-analysis. The other thing, in my experience, is that the real attraction of SEM is in multivariate meta-analysis, and in enabling me to do what is called meta-analytic structural equation modeling, that is, I take correlation matrices, put them together, and reanalyze them using SEM, which is going to be quite useful for many disciplines, such as the environmental sciences, ecology, the health sciences to some extent, the behavioral sciences, and so on and so forth. Medicine benefits a lot from traditional meta-analysis, but these fields are quite different, particularly when heterogeneity is an issue. There's a comment in the chat from someone who has to leave now: thank you for the wonderful talk. Thank you very much.
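The two-stage approach described here, pooling the studies' correlation matrices and then fitting an SEM to the pooled matrix, is implemented in metaSEM as tssem1() and tssem2(). A rough sketch with invented correlation matrices for three variables x, m, y, and a hypothetical path model x to m to y specified in RAM form:

```r
# Two-stage meta-analytic SEM (TSSEM) with metaSEM.
# Stage 1 pools correlation matrices; stage 2 fits a path model.
# All matrices, labels, and sample sizes are invented for illustration.
library(metaSEM)

vars <- c("x", "m", "y")
R1 <- matrix(c(1, .3, .2,  .3, 1, .4,  .2, .4, 1), 3, 3,
             dimnames = list(vars, vars))
R2 <- matrix(c(1, .2, .1,  .2, 1, .3,  .1, .3, 1), 3, 3,
             dimnames = list(vars, vars))
R3 <- matrix(c(1, .4, .3,  .4, 1, .5,  .3, .5, 1), 3, 3,
             dimnames = list(vars, vars))

# Stage 1: random-effects pooling of the correlation matrices
stage1 <- tssem1(Cov = list(R1, R2, R3), n = c(200, 150, 180),
                 method = "REM", RE.type = "Diag")

# Stage 2: RAM matrices for x -> m -> y. A holds the paths, S the
# (residual) variances; "0.1*x2m" means start value 0.1, label x2m.
A <- matrix(c(0,         0,         0,
              "0.1*x2m", 0,         0,
              0,         "0.1*m2y", 0),
            nrow = 3, byrow = TRUE, dimnames = list(vars, vars))
S <- Diag(c(1, "0.5*err_m", "0.5*err_y"))
stage2 <- tssem2(stage1, Amatrix = A, Smatrix = S)
summary(stage2)
```

This is the "synthesize the matrices, then analyze" workflow mentioned above; the path model and its labels are placeholders you would replace with your own structure.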
Please keep in touch. Okay. Now, how much does this differ from network meta-analysis? Because a lot of the concepts and the issues it solves sound similar. Is it just that it has more of a structure you can specify? It's more that you specify a structure to conduct the meta-analysis. Network meta-analysis is, of course, a different tool set for doing different things, making multiple comparisons across the board. But I don't know the correct answer to whether you can use SEM for network meta-analysis; that's something I will need to study more before I can answer correctly. Okay, so I think that's probably all the questions for today. I just want to thank you again for a really great workshop; I really enjoyed that. And thank you, everyone, for attending and for watching on YouTube. Oh, sorry, Anders, did you have one more question? I tried to clap my hands there; it didn't really work. But thank you very much. Thank you everyone, and we'll see you all tomorrow on the last day, with more exciting sessions coming up. We'll see you then. Fantastic.