So now I'd like to welcome today's presenter. Dr. Megan O'Connell is an Associate Professor of Psychology at the University of Saskatchewan. She practices clinical neuropsychology at the interdisciplinary Rural and Remote Memory Clinic. Dr. O'Connell is the lead on Remote Specialist to Primary Care Support Enriched Care Through Technology, or RESPECT, one of three projects recently funded by the CIHR Foundation grant to Dr. Debra Morgan. She's also PI on the Video Therapy Analysis Lab, or VITAL, funded by CFI, co-lead of Rural and Remote Indigenous Technology Needs Exploration, and part of CCNA's Issues in Dementia Care for Rural and Indigenous Populations. Finally, Dr. O'Connell is a member of the Psychology Working Group in the CLSA. She is currently working with the team on developing normative papers for the neuropsychology batteries and psychometric properties of the depression scale. So thank you again, Dr. O'Connell, for joining us, and I will pass control of the webinar over to you. Perfect. Welcome, everyone, and thank you very much, Dr. Vassim, for that introduction. I can hear you just great. Okay. Great. Thank you. That's very reassuring. So this is not the area I tend to practice in, but as somebody with training in clinical neuropsychology, measurement theory is something we're rather obsessed by. That's partially why I got myself involved in looking at some of the psychometric properties of the CESD-10, which is the depression scale that is used in the CLSA. And I have no conflicts of interest to disclose. I just want to briefly go over what the CESD-10 is, to orient you to it, whether or not you've started to analyze it. I also want to, before I get much further, acknowledge that I'm part of a team that is working on this. So although I've done confirmatory factor analysis and multi-group confirmatory factor analysis before, I didn't want to relearn that program again.
So I asked a colleague who tends to use EQS to do the analyses with me. His name is Peter Grant, and he's the one who did these analyses, which is great. And one of the things that will come out today, and in fact in preparing for this talk, is that although we thought we were done with our analyses, we realized we're not quite done, and I'll tell you a bit about why. So this is some work in progress. Before I get into what we did and why I think we have more work to do, I'll orient you to what the CESD-10 is. As mentioned in the title, it stands for the Center for Epidemiological Studies Depression Scale. This is the 10-item scale, derived from a longer 20-item scale, and there have been numerous data to support its validity in terms of classification accuracy against clinically diagnosed major depressive disorder. So this is a screening test, and it asks 10 questions. One is: how often were you bothered by things that don't usually bother you? How often did you have trouble keeping your mind on what you were doing? How often did you feel depressed? How often did you feel that everything you did was an effort? How often did you feel hopeful about the future? And as you can tell from the scoring, that one should be reverse scored. How often did you feel fearful? How often was your sleep restless? How often were you happy? Again, an item that needs to be reverse scored. How often did you feel lonely? And finally, how often did you feel you could not get going? Each of these items is scored on a four-point response scale: all of the time (do you feel depressed all of the time, for instance), occasionally, some of the time, or rarely or never. And of course, we have response options for people who do not know or refuse to answer.
Compared with the longer 20-item scale from which it was derived, initially described by Andresen et al. in 1994, the 10-item scale, not surprisingly, tends to have fewer factors. And the most commonly proposed factor structure for the CESD-10 is a two-factor structure, with a depressed affect factor, which contains eight of the 10 items, and a positive affect factor, which contains two items. I'll talk a little bit more about that as well. There are some controversies, much like most research; the more you get into something, the murkier the data become. But there have been controversies regarding the factor structure. Certainly there is a long history of controversies around the factor structure of the longer test, so it is not surprising that there are controversies for the 10-item scale as well. Typically, however, if you look at the literature on the 10-item scale, most authors find two factors, although many have found one, and on occasion, more recently, some authors have found three. And part of this controversy relates to what potentially is contained within these factors, particularly the fact that the second factor, lack of positive affect, happens to contain the two items that are reverse scored. So some have postulated that that is really what's underlying the potential separability of these two: they really are not separate constructs, it's just the fact that they're reverse scored, and they should really be used as a data quality check. Others, however, argue that the lack of positive affect is really quite distinct from the depressed or negative affect factor. And for those of you familiar with the diagnostic criteria for major depressive disorder, or major depressive episode for that matter, there really are two separate potential factors, one being sad mood, and the other being lack of positive mood or lack of interest, also known as anhedonia.
So others argue that depressed affect does always consist of negative affect, but that the lack of positive affect, or anhedonia, does not always include aspects of negative affect. So they really, in all likelihood, are quite separable. But I will say that that is controversial in the literature. That's not the focus of our work, though. Our work is really looking at the further controversies regarding measurement: whether or not the CESD has a similar factor structure for particular subgroups of participants. There are too many data to summarize here, so I just picked a few examples to talk about. Buran et al. in 2017 found a two-factor structure for participants of Zulu and Xhosa background, but a one-factor structure for Afrikaans-speaking participants in South Africa. And we have Lee and Chokkanathan, on the other hand, in 2008, finding that the two factors were invariant across males and females in a sample of Singaporean older adults. And the longer 20-item factor structure was found by O'Rourke in 2003 to be invariant across English and French, which is relevant to the CLSA data. So as you can see, even just from this little snippet of the studies on whether there are one versus two-factor structures for certain subgroups, there are quite a few controversies in the literature. And thus far, no one has examined the factor structure or its consistency for subgroups based on English versus French for the 10-item scale; that has been done only for the 20-item scale. And the question, of course, is why should you care? Why does this matter? It matters because you need to understand the factor structure of the CESD to inform how you can use this scale. And these are potentially separable aspects of depressed mood, that is, negative affect and lack of positive affect, which is quite important.
And if measurement invariance is established, then you can make the inference that observed mean differences can be attributed to differences in the true or underlying construct of depressed mood. So let me go back and talk a little bit about why factors matter when measuring psychological variables. Factors matter because we don't get to directly observe psychological states. Unlike height, we can't take a measuring stick and observe somebody's level of depressed mood, for example. Because of that, we have to make inferences about what their depressed mood might be from what we can observe. So we have unobserved variables: a psychological state is an unobserved, or latent, variable that is inferred from the variables we do observe, such as responses on a questionnaire like the CESD. And the question, put another way for multi-group factorial or measurement invariance, is: are responses on the depression scale driven by the same underlying variable of depression? That is the basis for factor analysis, and more importantly, the basis for the current investigation: are these underlying variables the same for all who respond to the scale? That is, is there evidence for factorial invariance? And that is what we looked at. Let's talk a little bit about what factorial invariance is. Factorial invariance, or measurement invariance, can be explored with multiple methods. Factor structure, I think, makes good sense for psychological variables because of the latent nature of measurement when you're looking at unobserved variables. And it's typically done, at least when using factor analysis, with something called multi-group confirmatory factor analysis, or MGCFA, which is a special case of structural equation modeling.
The question is: if you have the same level of the latent variable, in this case depression, do you have the same score on the measure? That's essentially what this is looking at. And if the measure has evidence for measurement invariance, then any group mean differences that you do see subsequently can be attributed to differences between these groups in the underlying construct. I'll talk a little bit more about that in a moment. So this is why this research, although I find it exciting and you may not, is quite important: if you're looking at, say, an outcome study where the CESD-10 is the outcome measure, and you want to make inferences about differences between groups, whether that be a treatment group versus a control group or groups based on something else, the question is, can you make that inference? Is the scale biased toward one group or not? That's essentially what measurement invariance is interested in: are there systematic sources of error that impact measurement on the scale? If not, then you can make inferences about what the scale actually says about between-group differences. So let me give you a sense of what we did, and I'll go a little bit more into that once we look at some of the data. We used the CLSA data from the tracking cohort for those who had complete data for the CESD, so no missing variables. And as you can see, it's a rather large sample; only a few of the tracking cohort had missing data for this. But you can also see, which is not surprising, that the distribution is quite positively skewed. What that means, as you can see from the scale here, is that most of the population sampled in the tracking cohort reports relatively few symptoms of depressed mood on the CESD, but there are a few persons within the sample who do report rather substantial symptoms of depressed mood on the CESD.
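The positive skew described above can be checked directly with the standard moment-based skewness coefficient. A minimal sketch, using invented toy totals rather than the actual CLSA data:

```python
# Positive skew means most scores pile up near zero with a long right tail,
# as described for the CESD-10 totals in the tracking cohort.
def skewness(xs):
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n  # second central moment (variance)
    m3 = sum((x - mean) ** 3 for x in xs) / n  # third central moment
    return m3 / m2 ** 1.5

# Toy totals: mostly low scores, a few high ones (illustrative, not CLSA data)
toy_scores = [0, 0, 1, 1, 1, 2, 2, 3, 4, 15, 22]
print(skewness(toy_scores) > 0)  # True: positively skewed
```

A skewness well above zero is what you would expect in a general population sample, since the CLSA did not oversample for mood disorders.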
And given that the CLSA was not sampling for particular clinical groups, specifically groups based on mood disorders, this distribution is quite expected: most of the sample will report few symptoms, but there is going to be a substantial portion who do report some, because major depressive disorder is a fairly common disorder. The first step when doing this sort of work is to do a confirmatory factor analysis with no constraints. The "no constraints" refers to the steps used to look at factorial invariance; this first model is the baseline model that we compare all subsequent models to. And because these factors are quite correlated, we used an oblique rotation. And here's just an example, I know it's fairly hard to see, of what a factor analysis diagram looks like. You can see that each of these yellow boxes represents one of the items on the CESD; this is the negative affect factor, and this is the second factor, with the two items loading on the lack of positive affect factor. And then we have the factor loadings here, and we also have error in measurement, and that's what I want to explain about that. So there are numerous methods for trying to determine whether or not this factor structure fits well with the data, and we used a couple of different measures. Chi-square, although useful as an index of fit, is very sensitive to large sample sizes. The CFI, or comparative fit index, has been adjusted so that it's less driven by these large sample sizes, so it's recommended; anything over 0.95 for CFI, most would suggest, is good fit. And then we have the root mean square error of approximation, or RMSEA, which is another index of fit. And although I recognize there's some controversy, some say under 0.06 suggests good fit and others recommend under 0.05, I think, given most of the other fit indices, that the two-factor structure is an adequate fit to the data.
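The two indices mentioned have standard closed-form definitions in terms of the model and baseline (independence-model) chi-squares. A sketch with made-up chi-square values for illustration, not the actual CLSA results:

```python
import math

# Standard formulas for the two fit indices discussed; the chi-square
# values below are invented for illustration, not the CLSA results.
def cfi(chi2_model, df_model, chi2_baseline, df_baseline):
    """Comparative fit index: 1 minus the model's excess chi-square
    relative to the baseline model's excess chi-square."""
    d_model = max(chi2_model - df_model, 0.0)
    d_base = max(chi2_baseline - df_baseline, d_model)
    return 1.0 if d_base == 0 else 1.0 - d_model / d_base

def rmsea(chi2, df, n):
    """Root mean square error of approximation for sample size n."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Hypothetical two-factor model against its null (independence) model:
print(round(cfi(300.0, 34, 9000.0, 45), 3))  # above .95 -> good fit
print(round(rmsea(300.0, 34, 20000), 3))     # under .05-.06 -> good fit
```

Note how the sample size n enters the RMSEA denominator: with tens of thousands of participants, even a large chi-square can correspond to a small RMSEA, which is exactly the large-sample sensitivity issue raised above.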
And as I said, the two factors were quite highly correlated. So let's get into what we do when we're looking for factorial invariance. What you do is look at particular subgroups. So we took our data and looked at variables that might impact how people respond to this depression scale. Given the literature in the area, there's some suggestion of differences by gender, although most of the data suggest invariance; gender is conceptualized in a dichotomous manner here. We explored, of course, whether males and females in the CLSA responded to the CESD in similar ways, whether their latent factors were similar. Whether younger and older adults differed, that is, taking our relatively continuous variable of age and making an arbitrary cutoff between younger and older at age 65, and asking whether there were differences in how these groups responded on these latent factors. English and French: as you can see, there's a larger proportion of the sample who filled this out in English, but there's a rather large group who are French as well. Does the CESD measure the same thing in English versus French? And then, because of the numerous data on differences in factor structure for different ethnic groups around the world, we looked at ethnicity. This was based on the question in the CLSA asking about which ethnic groups one's ancestors belonged to. And I have to say, trying to create a grouping of ethnicity cost me weeks of stress. It's in part because, as you can see, a large proportion of the CLSA sample reports Western European ancestry. They may report another ancestry as well, of course, because like most of us, we have multiple sources of ancestry. So what we did was, if they reported a Western European ancestry, that took precedence, and then there was a group of "other," which included numerous other groups.
Ultimately, we thought that this was the most defensible approach when publishing this, partially because of the very small sample sizes if we were to compare against, say, individual other groups; deciding how to categorize and code these variables was, I think, the most stressful part. We settled on Western European ancestry if you reported that, even though many reported it combined with other ancestry, and then people who reported other ancestry. We also looked at whether or not people reported that a doctor had told them they have problems with anxiety, and whether or not people reported that a doctor had told them they have a memory problem, to see if there are differences within these two subgroups as well. So, getting into what factorial invariance does: the most intuitive level, and the lowest level of evidence for factorial invariance, is called configural invariance. The question there is, for each of these subgroups, and we'll use French and English as an example, are there an equal number of factors? For the CESD-10, that would be the two-factor structure. So: is the two-factor structure similar across subgroups? The next level, once that is established, is whether there's evidence for weak invariance. That asks not only whether there are an equal number of factors, but also whether the factor loadings are the same for each of these subgroups. That is, are the items contributing to the latent variable in the same way for each subgroup? And then the next level of factorial invariance concerns itself with something called strong invariance: in addition, are the factor means the same? This is where the intercepts are constrained to be the same across subgroups.
And then there's strict invariance, which asks, in addition to all the previous levels, configural, weak, and strong invariance, whether there is evidence that the error in estimation is the same. That is, are the residuals the same? So to test each level of factorial invariance, you constrain these parameters in turn. For configural invariance, you constrain the number of factors to be two, because that's what the baseline model had. You constrain that for each of the subgroups, for example French and English, and you look to see whether the model fit for each of these subgroups remains similarly good to the unconstrained, or baseline, model. Subsequently, the same thing for weak invariance: not only do you constrain the number of factors for each subgroup, but you also constrain the factor loadings for each subgroup, and then compare the goodness of fit of the factor analytic model for each subgroup with that baseline unconstrained model. I'll talk a little bit more about what all of these mean as we go. So, so far we've looked at our data with these particular subgroups, and we can say yes: for each of the groups we considered, there are two factors. There are two factors for younger versus older participants, male versus female, English versus French, those with Western European background versus the group of other backgrounds, those with an anxiety history versus none, and those with a history of being told they have memory problems versus none. And following Hirschfeld and von Brachel's guidelines, we focused predominantly on the change in the comparative fit index, or CFI, as I said, partially because of the sensitivity of the chi-square to sample size.
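The four levels just described form a cumulative hierarchy: each level keeps all the constraints of the previous one and adds a new set. A sketch of that hierarchy as a simple data structure (the level and constraint names are my own shorthand for the parameters discussed above):

```python
# The four invariance levels described, sketched as cumulative constraint
# sets; each level adds one kind of constraint on top of the previous level.
LEVELS = [
    ("configural", {"n_factors"}),
    ("weak",       {"n_factors", "loadings"}),
    ("strong",     {"n_factors", "loadings", "intercepts"}),
    ("strict",     {"n_factors", "loadings", "intercepts", "residuals"}),
]

def constraints_at(level_name):
    """Return the full (cumulative) constraint set for a named level."""
    for name, cons in LEVELS:
        if name == level_name:
            return cons
    raise ValueError(f"unknown level: {level_name}")

print(sorted(constraints_at("strong")))
# ['intercepts', 'loadings', 'n_factors']
```

The ordering matters: you only test a higher level once the fit at the lower level has held up against the baseline model, which is why the analyses described stop at weak invariance for now.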
What we've done so far is look at weak invariance. So the next step, beyond fixing two factors for each of the subgroups, was fixing the factor loadings, that is, how much each item loads onto each factor. If we constrain the factor loadings to be similar for the subgroups, is the model still a good fit? Again using the change, or delta, CFI, we found that yes, it is a good fit for all of these subgroups. So that suggests that the factor structure, at least at this level, is the same for all of the subgroups we've explored thus far. And what does this mean? It means that the factor structure model fits across the subgroup analyses when you impose the constraint that the factor loadings for the items have to be equal in the two subgroups. Essentially, we're measuring the same factors with the same factor loadings, which is pretty good evidence for factorial invariance, and in fact many would argue that it's sufficient. And I'd say that this is as far as we thought we were going to go with these analyses, because this is as far as I've gone in the past. This is typically, in fact, per a recent review, and as I'll show on my next slide, as far as most people go in deciding that there's sufficient evidence. And it's because you know that one unit change on the item score is scaled to an equal unit change on the factor score across groups. So that's fairly good evidence that you know what you're measuring; the unit of measurement of the latent variable is identical across groups. So intuitively it makes sense that many have said that is sufficient, and in fact most have reported weak invariance as sufficient for measurement invariance, per a 2000 review of this literature.
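The delta-CFI comparison described above amounts to a simple decision rule: the more constrained model's CFI may drop by no more than a small threshold relative to the less constrained model. A sketch, where the common .01 cutoff and the CFI values are assumptions for illustration; the transcript does not state the exact threshold the team used:

```python
# Sketch of a delta-CFI decision: invariance at a given step is retained
# when adding the constraints lowers CFI by no more than the threshold.
# The .01 cutoff is a common convention, assumed here; the CFI values
# below are invented for illustration.
def invariance_holds(cfi_unconstrained, cfi_constrained, threshold=0.01):
    return (cfi_unconstrained - cfi_constrained) <= threshold

print(invariance_holds(0.970, 0.965))  # True: drop of .005, constraint retained
print(invariance_holds(0.970, 0.950))  # False: drop of .020, fit degrades
```

This is why the change in CFI, rather than the chi-square difference test, carries the weight here: with samples this large, a trivial misfit produces a significant chi-square change even when the constrained model is practically indistinguishable from the baseline.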
However, others have argued that strong and strict invariance are really needed to be able to speak to measurement invariance, and I'll talk a bit about why. I had to convince myself of this, as did my collaborator who's doing the analyses, and I'll try to convince you of it too. The case for strong invariance: thus far, with weak invariance, we've just constrained the number of factors to be the same and the factor loadings to be the same. With strong invariance, the next step is to constrain the factor means, that is, the intercepts, to be the same. If there are unequal factor means for groups, this could represent bias. For instance, if the means for the English-speaking group differ from the means for the French-speaking group, this could suggest a bias in the scale that could be related to the translation into French, for example. So really what this would say is that if strong invariance is not established, the score on a factor, the score on negative affect, for instance, could depend on whether you were English speaking or French speaking, i.e. on group membership, which is a problem. Establishing strong invariance really makes the case that there is no systematic response bias, and it's needed if you want to compare the means on these latent variables, in this case lack of positive affect and negative affect, across groups. And that's an important step, I think. The case for strict invariance, which is the most challenging level to establish, also makes a lot of sense to me, and I'll hopefully convince you of the same.
Just as a reminder, strict invariance now constrains everything we've talked about earlier: two factors, the factor loadings, and the means for subgroups on these latent factors. But in addition, to establish strict invariance you constrain the residuals, or errors. That is, you constrain the variance in each test item that is not accounted for by the latent factor. There's controversy in the literature on this: some would say you don't need to fix the residuals to be equal, because if the residuals aren't inter-correlated, you can assume they're just comprised of random error. Of course, the challenge is that most residuals are correlated, so that argument doesn't hold a lot of weight; you really do need to constrain the residuals and see if the model fit is still good. If the residuals are correlated, as I said, and there are differences across subgroups, this means that different variables, something unmeasured and therefore captured in the residuals, are operating on these measures across groups, or that the same set of variables operates differently across groups. Essentially, in the factor analytic model, whatever variance isn't accounted for is left in these residuals, or error, and what strict invariance is looking for is whether there are systematic differences in these residuals across subgroups, for instance between those who are English speaking and those who are French speaking, given the translation of the CESD. Some have argued that this level is unduly challenging to meet, but as I said, I've been reading more, and I think I would like to make a case for strong and strict invariance with the CESD.
We've thus far made a case for weak invariance, and some would argue that that is enough and we don't need to do any more: there are two factors. But if you're trying to make inferences about certain groups of people included in subsequent analyses, it's much stronger to be able to say that the scale really does measure the same latent variables in the same way and is not missing variables or accruing error in a systematic manner. I think that makes a much stronger case. So we have yet to do that, which is why I said this is a work in progress. Next, we need to constrain the factor means and explore whether there's evidence for strong invariance. And then, finally, we need to constrain the residuals and explore the evidence for strict invariance. One of the other future planned analyses, from when we proposed this a few years ago, was to look at whether those who have cognitive impairment respond to the CESD-10 in the same way as those who do not. Thus far, we were only able to compare those who self-reported that a physician had suggested they have memory concerns. But one of the beauties of the CLSA database is the linkages across numerous variables, and we do have cognitive measures. I'm part of a team, led by Holly Tuokko, developing normative data for the telephone-administered cognitive tests in the tracking cohort. Once we've done that, we can create a variable of cognitive status: are you cognitively impaired or not? That determination, or that inference from the scores, really depends on the creation of normative data and guides for interpreting impaired scores across the small battery of tests. So that work is ongoing.
So we don't yet have that variable, but we do promise we will create it, and then look at whether subgroups based on cognitive status influence the CESD. So I guess I'm seven minutes faster than I intended to be, but that's okay; more time for questions. I'd like to thank my collaborator who did these analyses in EQS, a program I am less familiar with, Dr. Peter Grant. I'd also like to thank my research associate for literature review support. And the CLSA was very kind in sharing data used for psychometric studies; because of that, Drs. Raina, Wolfson, and Kirkland are thanked and will also be co-authors on this psychometric project once we get it done. So as I said, we've started this, gotten into it, and realized that we'd like to establish higher-quality evidence for measurement invariance. We'll be continuing the analyses in July, and once the cognitive variables are at the point where we can create that variable, we'll be looking at measurement invariance for those who have cognitive impairment versus those who don't. So thank you. Well, thank you, Dr. O'Connell. That was a very good presentation, a nice overview of one of our data sets. I'd now like to open it up for questions. Just a reminder to everyone that mute will remain on, but you can enter your questions into the chat window at the bottom left corner of the WebEx window and I'll read them out. We have plenty of time for the question and answer session, so go ahead and feel free to write in your questions. We'll go ahead and get started. So we have one question about how the first question from the CESD, from your first slide, works with people who are chronically depressed and often feel bothered by things. Thank you for that question; that's a great question.
One of the subgroups that we would like to finish doing the analyses on, and we did start, but my colleague is away in the UK right now so I couldn't ask him about one of his findings, is those who have a history of depression. So again, the question is a self-report: has a physician or doctor ever told you that you have problems with depressed mood? We do have those data, and we can look at factorial invariance for that. And in fact, if you look at the literature on the CESD-10, the studies that find evidence for one factor rather than two tend more often to be in psychiatric samples. Now, we don't have a psychiatric sample here, so we can't directly compare to those data, but it certainly is an interesting question, because it would make sense that, particularly for those currently within a major depressive episode, their responses and the underlying factors could be quite different. But that's not taken into account in the scoring of what's used in the CLSA? No, we don't have a clinical diagnosis of depression. We have self-reports: has a physician ever told you that you have symptoms of depression? We don't have somebody doing a structured clinical interview to actually support that with the gold standard method for determining whether or not they have depression. We have done those analyses for depressed mood, or rather, for the self-report of a history of depressed mood, and it does look like there may be some differences in measurement invariance there, but we need to dig deeper. But it's not a gold standard diagnosis of depression. So you can't use this to quantify rates of illness? There are no cut-off values being validated or researched? Not in this particular project. In the past, though, the classification accuracy of the CESD-10, and I think this relates to another question.
I don't mean to look ahead, but I can't help it. The classification rates, established particularly for the CESD-20, use a clinical interview as the gold standard. So this is a clinical screening test; a clinical interview is the gold standard, and even within that, typically they use structured clinical interviews along with clinician-rated measures like the Hamilton Depression Rating Scale. And no, that hasn't been done here, because we do not have a gold standard in the sample. We can look at whether suggested cut-offs do tend to capture these groups, but that's not the same quality of data as the original data used to establish these screening measures, which looked at whether the self-reported screening measures correspond with the gold standard of clinical diagnosis. So you're using this information to quantify at-risk or high-risk groups within the sample, or levels of symptomatology? Is that right? Yeah, yeah. So screening tests are very useful because they can be administered easily, especially to large samples. But we always have to remember that with screening tests, I mean, the original CESD has fairly good sensitivity and specificity, for instance, as does the short form in its concordance with the longer form CESD, but again, there is error in measurement because it's a screening test. Which I think touches upon another question that we have on the board. Okay, moving on to another question. Was the CESD translated into French for French-speaking participants? And assuming yes, was the translation one that was validated in other studies? Yeah, that's a very good question. Let me take a quick look; I am looking through the Psychology Working Group's notes from the initial meetings. I do not see reference to French. So this is a question where I guess the simple answer is that I don't know.
But as I said, I do have the Working Group notes here, and I think it's quite possible that it hadn't been supported in its translation to French, which I think is all the more reason why this work on invariance is important. Certainly. I'm just looking through, and I know that was one of the goals with all of the scales: was there evidence for them in English and in French? And that's not common or easy to find. You may have touched upon this in our previous discussion of the first couple of questions, but how do recent negative life events come into play in how you're analyzing these data? That's actually not a variable I requested when looking at factorial invariance, so it's not even one I'd considered. Recent marital changes, deaths in the family, retirement transitions. That's actually a really good question. Maybe I should ask for the data again. It's easy to come up with more things to look at. All right, another question. What is the advantage of using the CESD instead of another depression rating scale, such as the Beck Depression Inventory, the Hamilton Depression Rating Scale, et cetera? What situations do you see the CESD primarily being used in? Well, the Beck is a great scale. There are some concerns about its reliance on somatic symptoms, however. And it also costs money to use; it's copyrighted, so every time you use the BDI, it costs money. I think those are the kinds of factors that are a concern when you're looking at giving a scale to 21,000 Canadians. Also, particularly because of its reliance on cognitive and somatic symptoms, it isn't typically used across the lifespan, so it does have some disadvantages. The Hamilton Depression Rating Scale is supposed to come from a clinical interview and is therefore not feasible in a large sample.
Actually, in my own clinical practice, I augment what we get from the clinical interview with the longer version of the CESD, and I think I'm now going to move to the shorter version just based on this work. I find it quite useful for augmenting what I get clinically. I like to get at information from multiple methods, particularly alongside the interview, because we tend to have a highly medicalized setting, and it can be challenging to answer questions if you're not used to talking about private issues such as mood. So I like to get at the same construct using different methods, and, as I said, I even use the CESD myself in clinical practice. But I can't speak to the original decision, because the working group was meeting back in 2007 and I wasn't a member yet; I was still a grad student. I think those are some of the factors that were involved in those decisions, though. And I know that with the sample I see, I don't use the BDI. I certainly could pay for it; that wouldn't be a concern in my clinical setting, but it is fairly heavily weighted toward the cognitive and the vegetative or somatic symptoms. And finally, do you see any predictive abilities, or do you have any thoughts on using this longitudinally in the future? Is there anything in the literature to help think through future work on it? Well, I think this work, although, as I said, it may not be exciting to some, is really important for how we can use the scale in any longitudinal study. If we do establish evidence of invariance, then it allows other researchers to use the depression scale longitudinally with much more faith that it's not biased against certain groups or, stated positively, that it is measuring the same thing in the same way.
So I think this is a foundational piece of work that really influences how the scale can be used longitudinally and, more importantly, the inferences drawn from its test scores. Certainly. Well, if we don't have any other questions, I'll thank you again, Dr. O'Connell, for your very nice presentation. We appreciate your participation in the CLSA webinar series and your work using the CLSA dataset. Wonderful. Thank you very much. I'd like to remind everyone that CLSA data access request applications are ongoing. The next deadline for applications is October 16th, 2017. Please visit the CLSA website under Data Access to review the available data and for further information and details about the application process. Please check the CLSA website for the dates of our next webinar; we'll be taking a July and August summer hiatus and starting again in September. The website is linked to the right of your webinar screen. I'd like to thank everybody for joining us. Thank you.