Welcome to today's presenter. Dr. Holly Tuco is a professor in the Department of Psychology and a research affiliate of the Institute on Aging and Lifelong Health at the University of Victoria. From 2009 to 2014 she was the director of the Center on Aging at the university. She's a clinical neuropsychologist and, prior to joining the faculty at the University of Victoria, was the supervising psychologist at the Clinic for Alzheimer's Disease and Related Disorders at UBC Hospital. She's also worked on a geriatric mental health outreach team in the Capital Regional District of BC. Dr. Tuco has been involved with the CLSA since its inception in 2001 and was the psychological health team leader until 2017. So at this time I'd like to welcome Dr. Holly Tuco and pass the presentation on to her. Welcome, Dr. Tuco.

Hello, and thank you, Katherine. I guess we'll just get started here. I have to remember how to do this now. Here we go. Okay, so I've been asked to speak to you today about a project we're undertaking: developing normative data and comparison standards for the cognition measures employed in the CLSA. The purpose of today's session is to provide you with a snapshot of the procedures that we're using to develop these comparative standards for the cognition measures, and then to solicit your input regarding some tools that are being generated for use by researchers and clinicians with respect to the cognitive data.

So this is our team. We are a team across the country. I am the team leader here at the University of Victoria in Victoria, BC. Then we have Dr. Megan O'Connell at the University of Saskatchewan in Saskatoon, Dr. Martine Simard at Laval University in Quebec City, Vanessa Taller at the University of Ottawa, and Lauren Griffith at McMaster University in Hamilton, who is part of the CLSA main office. And then there's the research team here at the University of Victoria: Stacy Ball, Dr. Helena Cadlock, and David Holt.
We have funding for this project from the Alzheimer Society of Canada in partnership with the Pacific Alzheimer Research Foundation. The aims of our project are: to examine how Canadians typically perform on measures of cognitive functioning; to understand the health and lifestyle factors that affect cognitive functions; to develop sets of normative comparison standards for the measures of cognitive function from the CLSA for French- and English-speaking Canadians; to create tools for interpretation that can be used to generate classifications of individuals for use in research and clinical practice; and to lay the foundation for refinement of Canadian norms for cognitive measures in French and English as the longitudinal data become available from the CLSA.

Before we go on, I'd just like to mention why it's important to have Canadian comparison standards for these measures of cognition. First, most existing normative standards are based on non-Canadian samples. This is particularly important for the primarily French-speaking segments of the Canadian population, though some norms have become available more recently; until recently, there's been little comprehensive information for making comparisons when identifying changes in cognitive function. Even for English speakers, relying on data collected elsewhere in the world may not provide the level of sensitivity to change desired within our own healthcare context. Second, existing normative standards may be outdated. The collection of data for the creation of normative comparison standards can be quite expensive and time-consuming; for these reasons it may not be collected often, and existing data may be out of date and not relevant to the current population. Third, existing normative standards for measures may not cover the full spectrum of ages from midlife to later life.
Sometimes the age range of interest was restricted and did not cover all age groups. For many years, people over the age of 65 were not included in data sets, or data sets only included people aged 65 and over. This limits what the data can tell us about how cognition changes over time.

Dr. Tuco, I apologize for the interruption. It sounds like we might be having a little bit of audio difficulty, so I'm wondering if you might be able to increase the volume that you're speaking at? I'm at maximum. Okay. Well, I'll try to speak up; I'll move the mic a little bit closer. Try that. That's great, thank you. Increase my volume. Thank you.

Fourth, because the collection of data for the creation of normative comparison standards can be expensive and time-consuming, small samples have often been used, and it's not possible to examine the effects of many different factors when you have quite small samples. Typically, normative standards have been developed on individual measures, again for cost and time reasons, even though these individual measures are often used together. When individual tests are combined in this way, there's an increased likelihood that false positive errors will be made, because relations between the measures are not taken into consideration. This means people will be identified as impaired when they're not, and that's quite a costly mistake to be making.

So these are the measures that were selected for use in the CLSA. There were a number of selection criteria for the measures: appropriateness to the longitudinal goals of the CLSA; relevance for those between ages 45 and 85, so we didn't want to have any floor or ceiling effects within those age ranges; the time and cost of administration; and that they be available in English and French. The CLSA used performance-based standardized measurement of cognitive functions to provide specific information concerning well-defined, clinically relevant areas of cognitive function.
Part of the goal of the CLSA is to look at normal changes as well as to potentially identify cognitive impairment. We have a heavy emphasis on executive functioning, as it's particularly relevant to those capabilities that enable people to engage in independent, purposive, self-regulatory behaviors and adaptive functioning, again a major aim of the CLSA. Recently we published a paper that describes the selection and application of the cognition measures in the CLSA. In this paper we examine the CLSA tracking cohort cognitive data and compare it with previous studies that have used similar measures. As you may know, the manner in which a measure was administered in the CLSA may differ from the way it's typically used in clinical practice. For example, the tracking cohort was administered all of the measures over the telephone. Another example is the CLSA's use of the Rey Auditory Verbal Learning Test. In the CLSA we used one learning trial and a five-minute delay, whereas the typical administration involves five learning trials, a distractor task, and a delayed recall trial, often 15 to 30 minutes after the fifth learning trial. Suffice it to say that the CLSA cognitive measures may not be directly comparable to the same measures reported in the literature. In this initial paper we clarified the differences in administration and provided an initial examination of the CLSA data in relation to other studies employing the same or similar measures. These differences in administration are another reason why we felt compelled to provide a comprehensive examination and description of how these measures work within the CLSA.

The plan now is for me to walk you through the steps and procedures we followed in examining the CLSA cognition data for our purposes, to develop these normative standards. As you will recall from my earlier slide, our first order of business was to examine how Canadians typically perform on measures of cognitive functioning.
To do this we chose to select a neurologically healthy, or neurotypical, sub-sample from the CLSA tracking cohort. So here we have the tracking baseline of 21,241 participants, and then we went through various steps to delete participants. In our neurological filter we deleted the people with the following self-reported problems: transient ischemic attacks, cerebrovascular accidents, Alzheimer's disease, Parkinson's disease, multiple sclerosis, epilepsy, CNS cancer, or concussion. These are self-reports in the tracking data, responses to questions concerning these conditions, and we selected those people out. We also selected for people who had all of the measures of cognition, as some people had incomplete data, and we selected out people who had missing education level, missing language, no consent to record the testing session, or bad recordings for the Rey, the animal fluency, or the MAT; this information is available in the data set. We then ended up with 19,415 neurotypical participants in our sub-sample, and requiring complete cognitive and demographic data brought us down to 14,855. Then we deleted bilingual speakers. That's not people who self-identify as bilingual; it's people who switched languages during the assessment. The reason we did this is to ensure that the participants were not engaging in a different task than they were given, because switching languages may actually affect test performance. So we took out those 745 cases where people were switching language during the testing. Our final sample, who completed all cognitive tests in either English or French, was 14,110, with 12,350 of those in English and 1,760 in French.

So then our next order of business was to understand the health and lifestyle factors that affect cognitive functions.
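As a rough illustration, the exclusion cascade just described can be sketched in code. All field names and condition labels here are hypothetical placeholders, not the actual CLSA variable names:

```python
# Sketch of the exclusion cascade used to build the neurotypical sub-sample.
# Field names and condition labels are hypothetical; real CLSA variables differ.

NEURO_CONDITIONS = {"TIA", "CVA", "Alzheimers", "Parkinsons",
                    "MS", "epilepsy", "CNS_cancer", "concussion"}

COGNITIVE_MEASURES = ("rey_trial1", "rey_delay", "animal_fluency", "mat")

def is_neurotypical(p):
    """True if the participant self-reported none of the exclusionary conditions."""
    return not (NEURO_CONDITIONS & set(p.get("conditions", ())))

def has_complete_data(p):
    """All cognitive scores present, plus education, language, and usable recordings."""
    return (all(p.get(m) is not None for m in COGNITIVE_MEASURES)
            and p.get("education") is not None
            and p.get("language") in ("en", "fr")
            and p.get("recordings_ok", False))

def build_subsample(participants):
    """Apply the three filters in order: neurological, completeness, language switching."""
    neurotypical = [p for p in participants if is_neurotypical(p)]
    complete = [p for p in neurotypical if has_complete_data(p)]
    return [p for p in complete if not p.get("switched_language", False)]
```

The ordering mirrors the talk: the neurological filter first, then the completeness checks, then removal of participants who switched languages mid-assessment.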
We looked at a variety of demographic variables that have been identified in the literature as potentially being related to cognitive functions. We looked at age, and some cognitive test scores decreased non-linearly with age in the English sample. We looked at education level, which was highly skewed toward higher education in the CLSA sample. We looked at sex differences and found no differences there. We looked at language, English and French, and there were differences on some cognitive test scores, as would be expected, since some of these measures are in fact language measures and may differ between English and French. We also looked at secondary covariate measures of health: self-rated general health, self-rated mental health, depression as yes/no on the CESD, self-rated eyesight, and self-rated hearing. After we accounted for age, education, sex, and language, none of the secondary covariates were related to the cognitive test scores.

So then we move on to our analysis of the data, now that we've selected our sample. Step one is preparing the data, and this included splitting the data and checking the assumptions of skewness, linearity, and the effects of weighting. There are weights available, and I'll mention that in a minute. Decisions needed to be made concerning how to handle violations of these assumptions, so that's what we were doing in this first preparatory step: identifying any violations and deciding how to deal with them. On the skewness issue, because of the size of the sample, we chose not to normalize but to accept the distributions as is. Our comparisons are then based on the actual distributions within the population. With respect to step number two, some measures were linearly associated with age whereas others were not.
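One generic way to check that a secondary covariate adds nothing once the primary covariates are accounted for is to compare R² between nested regression models. This is only a sketch on simulated data, not the CLSA analysis itself; the covariate here is deliberately generated as unrelated to the score:

```python
import numpy as np

def r_squared(X, y):
    """R^2 from an ordinary least-squares fit (intercept column added)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

rng = np.random.default_rng(42)
n = 2000
age = rng.uniform(45, 85, n)
education = rng.integers(1, 5, n).astype(float)   # four education levels
self_rated_health = rng.normal(size=n)            # secondary covariate, unrelated here
score = 60.0 - 0.3 * age + 2.0 * education + rng.normal(scale=5.0, size=n)

r2_base = r_squared(np.column_stack([age, education]), score)
r2_full = r_squared(np.column_stack([age, education, self_rated_health]), score)
# The R^2 gain from adding the secondary covariate is essentially zero here.
print(round(r2_full - r2_base, 4))
```

A negligible increment, as in this simulation, is the pattern consistent with the finding that the secondary health covariates were unrelated to the scores after the primary covariates were included.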
So we chose to use piecewise regression by age group for the measures having a non-linear relationship with age, and then smoothed the resulting model that relates the cognitive scores to age prior to obtaining the final cumulative frequency curves. At this point we also considered the need to apply the analytic weights to the data. The analytic weights were designed to estimate the relationships among variables when the sampling design and/or responsiveness of the participants may be affected by the variables, for example age. These weights then account for missing or eliminated data and response bias. We engaged in discussion with CLSA experts concerning the weights so we could understand and appropriately assess their impact for our specific analyses, and that's very important when selecting a subsample of the CLSA: to determine whether the weights are still appropriate for the subsample that's been selected. With respect to the regression models, we examined the mean analytic weights for the English and French subsamples, and these were close to one: for the English it was 1.0172 and for the French it was 0.9879. This suggests that our two-language CLSA subsamples of neurologically healthy participants from the tracking data are representative of the CLSA. In addition, we ran the regression analyses both unweighted and using the analytic weights, and the parameter estimates were almost indistinguishable. On this basis we determined that using the analytic weights for the regression analyses was not required.

In step three of our procedure, we wanted our comparison standards to be reflective of the entire subset of the Canadian population represented by the CLSA subsample: neurologically healthy men and women aged 45 to 85, speaking English or French.
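A minimal sketch of the kind of piecewise fit described above is a linear spline whose slope is allowed to change at a knot. The knot at age 65 and the simulated decline pattern are arbitrary illustrations, not the CLSA's actual choices; a one-knot spline has the convenient property that the fitted score-versus-age curve stays continuous at the join:

```python
import numpy as np

def fit_piecewise(age, score, knot=65.0):
    """Least-squares linear spline: the slope may change at `knot`,
    but the fitted curve remains continuous there."""
    X = np.column_stack([np.ones_like(age), age, np.maximum(age - knot, 0.0)])
    beta, *_ = np.linalg.lstsq(X, score, rcond=None)
    return beta  # [intercept, slope before knot, change in slope after knot]

def predict(beta, age, knot=65.0):
    """Model-based score at a given age (used to build frequency curves by age)."""
    return beta[0] + beta[1] * age + beta[2] * np.maximum(age - knot, 0.0)

# Simulated scores that decline faster after the knot.
rng = np.random.default_rng(7)
age = rng.uniform(45, 85, 3000)
score = (30.0 - 0.1 * age - 0.4 * np.maximum(age - 65.0, 0.0)
         + rng.normal(scale=2.0, size=3000))

beta = fit_piecewise(age, score)
# Estimates land close to the true values (30, -0.1, -0.4).
print([round(float(b), 2) for b in beta])
print(round(float(predict(beta, 70.0)), 1))  # model-based score at age 70
```

In the actual procedure, decisions about linearity and smoothing were made per measure; this only illustrates the general shape of a one-knot piecewise model.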
In addition to the analytic weights, the CLSA also provides inflation weights, I believe they're called trim weights in the data set, as a handful of cases with extreme weights were eliminated from the data set. The inflation weights take into account the number of people of each sex, age group, education level (low versus non-low), and province, by using the number of eligible people in the ministry registries for the 10 participating provinces. In this step, our step three, we applied the inflation weights to the cumulative frequency distributions of the z residual scores within each subgroup, taking age in years into account. In doing this step we compared the unweighted and the weighted frequency distributions using two statistical tests, the one-sample Kolmogorov-Smirnov test and the chi-square goodness-of-fit test. We also examined for differences in score values at the second percentile, the fifth percentile, and the sixteenth percentile, as these are clinically relevant for identifying impairment. Depending on the cognitive measure, these differences ranged from minor, for example on the Rey, to quite substantial on the MAT, the Mental Alternation Test. For this reason we chose the weighted cumulative frequency distributions for all cognitive measures; if we had not used the inflation weights, certainly for the MAT we would be making some quite substantial errors in identifying people at the lower end of the distribution.

Now we move on to step four, where we're assessing performance on multiple tests using a battery approach. Here's where we can do better than just looking at individual measures: we can look at them in combination and provide information about the performance of people if they do two tests, three tests, or all four tests. The issue with multiple tests is that when a clinician assesses a client on more than one test, the probability that the client will fall in an atypical range on at least one of those tests
increases just because there are multiple tests. We want to avoid misidentifying people based on chance occurrences, so with this data we could control for that in step four.

So that covers the procedures that we've used, and we have done this for the cognitive measures in the tracking data. We're now moving on to do the same thing for the comprehensive cohort; the decisions might be different, but the procedures will be the same. As you will know, there are more cognitive measures in the comprehensive cohort than in the tracking cohort, so we're working on that now.

Then our next order of business, from our original slide, was to create tools for interpretation that can be used to generate classifications of individuals for use in research and clinical practice. As we're still in the developmental process for these tools, we certainly welcome any feedback you may have concerning their potential uses. At the moment these are proposed tools, and we certainly want to develop things that people would actually use, so we appreciate your feedback if you would like to provide us with that.

The first set of tools that we are considering developing are derived variables that would be available in the CLSA data set. The percentile rank for each cognitive measure completed would be available for each participant. For the battery approach, the derived variable would be impaired/not impaired, and this would depend on the number of cognitive measures that were completed. Our analyses were done with people who had all four measures, but we can also provide that information as a derived variable for those participants who only completed one, two, or three of the cognitive measures. So that's what we're considering for the moment with respect to derived variables available in the data set. What this would look like is there would be a raw score for, let's say, the Rey trial one, and then there'd be the percentile
score in relation to these neurotypical normative data, and then one could select their own level to identify impairment. So you could select those less than five percent, less than 16 percent, or less than two percent; the researcher or clinician can set their own determination of what they would call impaired.

The next tool that we are developing is a clinical tool: a web-based tool for use outside of the CLSA data. If a clinician wishes to administer the CLSA measures to clients, the resulting scores and demographic characteristics can be entered into the web-based tool, and it will provide the percentile rank for each measure. The clinician is then free to interpret this level of performance as impaired or not according to their own criteria. For this to be useful, clinicians would need access to the CLSA cognitive measures' administration and scoring, to be doing it the same way. The web-based tool will also provide an indication of overall performance on the battery.

The next few slides I'm going to show you are a mock-up of how this tool would work for the tracking data. I'll run through them relatively quickly to give you an idea of what it would look like and the types of information that would be provided by the tool. We plan to provide the same type of information for the comprehensive data; we're still working on that, while we're still working on all of it, so again we welcome your feedback as to whether you think these tools would be useful.

For the web-based tool, one would go to where it's stored and there would be a welcome page. On this page we would have more information: a brief description of the measures, how they were administered, and the sample characteristics, so that users can determine if these comparison standards are relevant for their purposes. Then there would be a page where information can be entered: age, sex, language, and education level, and again we're using four levels of education. So that
information for the client would be entered, and then the raw score would be entered on this page for the Rey, for the animal fluency, or for the mental alternation test. For animal fluency the CLSA actually has two different scoring protocols: one is a very strict scoring protocol and the other is lenient. Both have been used in the literature, which is why we provide both of them; people can choose which type of scoring procedure they wish to use, enter the raw score, and indicate which scoring procedure it was.

Then the program would provide the user with output saying: your client's score, compared to whichever language was chosen, whichever sex was chosen, whichever age was chosen, and education level, and it would provide this graph as well as scores. On the graph, the blue area at the left-hand side of the curve is the five percent cutoff, and the pink dot at the test score of six there is the client's score, so we can see how the client is performing in relation to a five percent cutoff. Also, where it says "x percentile, equivalent to t score of y," those would be filled in: whatever the percentile rank was for the person would be there, and a t score would also be provided to the user. This is the five-minute delay; the same kind of information would be provided. Animal fluency, the same type of information; here for lenient scoring and then for strict scoring, and one can score them both ways if one wishes to. And then the same type of information for the mental alternation test.

Here we have also provided the same information with a bar indicator. This type of information is presented in many different ways in the literature, and we want to provide what people feel most comfortable with, so another alternative is a bar graph. This purple zone is the client's score here, and the width of the purple bar indicates the breadth of
the percentiles covered by that score. Every individual score may cover a number of percentiles, given the range of scores that we have for each test, so the width of this purple bar would indicate how many percentiles that score is covering.

Here we have the summary: the program would provide a summary of the client's performance on the different measures. Over on the right-hand side, where it says "overall probability of being below whatever on the four tests," the person entering the data can put in whatever level they're interested in (two percent, five percent, sixteen percent, twenty-five percent, whatever), and it would then give an indicator of the overall probability of being below that when all four tests are combined. Here we thought of using some color coding: if it was less than ten percent it would be yellow, whereas if it was over 25 percent it would be green, indicating that all things were good to go, and if it was less than two percent then it would be red. So we'd combine the color coding with the values presented.

We're very interested to hear everyone's thoughts about the proposed web-based tool: whether people would actually use it, and whether this is the right kind of information people would want to get from the tool. We want to know: is it relevant, is it user-friendly, are there preferences? This goes for the tools in general: are there preferences on the derived variables, are there preferences on the look of the web-based tool, the distribution graph versus the bar graph, those kinds of things? And is there an interest in acquisition of the CLSA cognitive measures' administration and scoring at a small cost, for cost recovery? We are looking into the possibility of having that available to people who may wish to use the same data collection tools that were used for the CLSA. So those are all questions we have for people, and we'd be happy to get input on that.

In the process of
conducting this work to develop normative data, we unearthed other issues that need to be addressed in the context of our project. They can also be addressed in other contexts, but we certainly found that these were important issues, and sometimes they were raised by the reviewers. Our third paper has just been accepted, so we have two more papers in addition to the one I talked about earlier, and we do get some questions from the reviewers that we hadn't thought about. So these are additional things that have come up and that we wish to look at in the context of our project before we complete it.

One of these is examining the relations between the cognitive measures administered in the tracking cohort and the same measures administered in the comprehensive cohort. So there are three: the measures from the tracking were also measured in the comprehensive. The comprehensive has additional measures, but we're looking just at those measures that were the same in both. While the administration and scoring of the measures remains the same between the tracking and the comprehensive, the samples differ, and knowing how the cognitive measures function for each sample will be important for the use of the derived variables and for the web-based tool. So we are planning to take a look at the relation between the cognitive measures that were administered in the tracking and the same measures in the comprehensive.

Another important issue is whether the cognitive measures are assessing the same constructs and functioning in the same way for English- and French-speaking samples. We've chosen to develop the norms separately, but it's important to demonstrate how language of administration may be associated with performance on the cognitive measures, if at all. Certainly in the Canadian Study of Health and Aging, a study that was done many years ago, we did look at the same neuropsychological battery administered in English and in French, and there were differences in the constructs being
measured. That's quite common across different languages or different cultures: the underlying constructs are actually different. So we need to look at that in the CLSA data in the context of our project.

And then, since we've developed the norms by selecting the neurologically healthy people, it will be important for us to examine the performance of those who self-reported neurological conditions that may affect cognitive functioning. That's a form of validity check, but given that the CLSA sample was drawn from community-dwelling people who were screened for indications of significant cognitive impairment before they took part in the CLSA, the validity of the norms will be best determined over time. It may be that we won't see much difference at this point in time, but we anticipate differences over time. So our examination within the baseline data will serve as the foundation for this future work, to be conducted by others.

Just to recap, I have zipped through this stuff, hopefully not so fast that nobody understood a word I said, but the purpose of this session was to provide you with a snapshot of the procedures that are being used to develop the Canadian comparison standards for the CLSA cognition measures and to solicit input regarding the tools being generated for use by researchers and clinicians. I hope you found the session informative, and we certainly are interested to hear your feedback. Thank you.

Thank you very much, Dr. Tuco, for this excellent presentation. I'd like to now open the session up for questions. Just a reminder: if you have a question to ask, you're welcome to do so using the chat function at the bottom right-hand corner of the WebEx window, and please be sure to select "all participants" before you press send. We have received a few questions throughout the session, and Dr.
Tuco, I will read them to you now and you'll have the opportunity to respond. The first question, which we received from Carol, says: can you talk through how the comparison divisions will affect the outcomes, i.e. age, sex, location, education? Are they standardized to each subgroup, and can you compare across these subgroups?

Let's see if I understand the question. We have multiple subgroups broken down by age, sex, and education, and we did all of that by English and French. So yes, there are multiple subgroups. The analyses give us the information for each particular subgroup, and that is then used to create the percentiles that are used for the derived variables. Am I answering the right question? Is that what the question was getting at?

I'll ask Carol if she wants further clarification to maybe type it in the chat box, but for the time being I'll move on to the next question, and we can come back to it if we need to. The next question was: do you have ideas of how to use these measures for longitudinal outcomes?

Well, the short answer is yes. One can use them to look over time at how people are changing, to look at the raw data to see whether or not they are changing over time. One particular thing is that as people start to self-report more problems, we may see that we need to alter the norms, because they may have had these cognitive problems at the time but were not identified. We could then remove those people from the data set and create what are called robust norms, by taking out people who were subclinical when we developed the norms and become clinically impaired later on in time. That's certainly one thing that could be done, and there are lots of other ways that the derived variables could be used going forward: to see, let's say, whether the people scoring in the lower end of the distribution, not classified as impaired, are more likely to decline over time than people higher up in the distribution. So there's a number of different things
that could be pursued with this data. Great, thank you. We have another question from Ashok, asking about the normative scores: were they computed using longitudinal data or cross-sectional data? Can you just confirm that for us?

Cross-sectional data; we are only using the baseline data at this point.

Okay, thank you very much.

And that's what I was saying: with longitudinal data one could then look to see, or remove, people who become impaired over time and adjust the norms going forward. We are documenting all of our steps in a very comprehensive technical report that's a hundred-some pages long, so that in the future people could use the same procedures to alter the normative data set if more people become impaired and need to be removed.

Thank you very much. The follow-up question: did the analysis in developing the normative scores account for regional dependencies?

My understanding of the inflation weights is that they do take province into consideration. We did not look at individual regions beyond that, and we didn't specifically look at regions; we wanted to keep our sample as large as possible and not break it down too much. But I do believe the inflation weights take region into consideration.

Okay, thank you. Dr.
Tuco, you mentioned that it is common to see differences in the cognitive battery between English- and French-administered tests. Do you have any insights as to why that might be the case?

Well, one of the reasons is, for example, let's just take verbal fluency as an example. There we're asking people to generate words beginning with a particular letter of the alphabet. We know that there is a different number of words available to people depending on whether it's English or French, so right there you've got a difference in the baseline number of words available to the person to generate. It tends to be those types of measures that differ between people speaking different languages, because the language itself is different, and so the performance of people on a measure that's dependent on that language has the possibility of being different.

Very interesting. I know you mentioned that there may be opportunities for further analysis and further research related to some of those variables and how they might affect the cognitive scores, and certainly a study focused on language would be an interesting one.

And there are many different ways with the CLSA data that one can look at that. We're just looking very superficially, I would say, at English versus French on the overall performance on all measures together. But we also have data about how many different languages people speak and what those different languages are, and so our look, as I say, is rather rudimentary: just whether the test was administered and responded to in French or English, and we've removed those people that switched back and forth. But many of those people also speak other languages, and looking at those differences will also be very important. That's not part of our project; I believe there are other people who have identified multilingualism and bilingualism as an issue that they wish to look at, perhaps more
broadly than cognition. But yes, very interesting areas of research. Absolutely.

Another question, from Chris: would it be possible to form a categorical score based on these measures that could be used to estimate a participant's cognitive status? Yes, that is the short answer. As for how one wishes to do that, you will have flexibility with the derived scores that will be available; one can apply one's own classification criteria to decide how to categorize people. That is why we are providing the percentiles, so that people can classify in their own way. With respect to the overall battery, we could also provide impaired/not impaired at different cut-off levels if people were interested in having that available to them. So yes, there are various ways one could use the information to create whatever classifications one wants.

Okay, thank you very much. We have another question from Ashok: in the weighted analysis that was discussed, you mentioned that the weights accounted for missingness. Does this control for the type of missingness, or do we need analyses that address the type of missingness, such as MAR or MNAR? That is really a question for my statistician, but I am going to say that it depends. When we looked at the analytic weights, they were close to one, which told us that our selected sample was similar to the CLSA sample overall and that we did not need to think beyond that. If they were not, then I think you would need to look more carefully at what is going on with the data. In terms of really understanding the whole issue around missingness and what the weights can do, I would suggest speaking with the CLSA experts on the weighting, which is what we did to find out what we needed to do for our data. I cannot answer what it would look like if things had turned out differently, but I do think you would have to look more carefully into what is going on with the data.
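The percentile-based classification idea mentioned above could be sketched as a simple mapping from percentile rank to a user-chosen category. The cut-off values below are purely illustrative assumptions, not CLSA recommendations; each research or clinical team would substitute its own thresholds:

```python
def classify_percentile(rank, cutoffs=((2, "impaired"),
                                       (9, "borderline"),
                                       (25, "low average"))):
    """Map a percentile rank (0-100) to a categorical label.

    `cutoffs` is a sequence of (upper_percentile, label) pairs in
    ascending order; ranks above the last cut-off fall into the
    default category.
    """
    for upper, label in cutoffs:
        if rank <= upper:
            return label
    return "within normal limits"

print(classify_percentile(1.5))   # -> impaired
print(classify_percentile(60.0))  # -> within normal limits
```

Passing a different `cutoffs` sequence gives each team its own classification scheme from the same derived percentile scores.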
Sure, absolutely. And if folks attending the webinar today have further questions of this nature they would like answered, I encourage you to send them to the CLSA through the CLSA Gmail address, the one you would use when contacting us through Eventbrite. Feel free to send your questions there, and I will make sure they get to the right person so we can get you an answer.

Another question, Dr. Tuco, from the chat box: is the web-based input the same whether an administrator or a questioner is entering it? Well, it does not matter who is asking; whoever enters the data will get the responses. I think what they may be getting at is whether this web-based tool will be available to participants to enter their own scores and find out where they fall. Again, we have not worked through all the issues around access to these tools, whether access would be restricted or open, so at this point I do not really have an answer. If it is open access, then anyone can put scores in and see what the results would be.

Okay, thank you. And the final question from the chat box: regarding the normative data for the cognitive tests, do you have an idea of the minimum and maximum plausible scores for these (the Stroop test in particular)? We do have that information. I cannot tell you what it is off the top of my head, but it will certainly be available in the documents we write up. And, just looking back at an earlier slide, there is the range of scores: for the Rey it is 1 to 15, for animal fluency it was 0 to 50, for mental alternations it was 0 to 52, and for the Stroop we would have that same type of information available.
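The plausible ranges quoted in the talk could drive a simple screening step that flags out-of-range values before norms are applied. The dictionary below includes only the ranges stated above (the Stroop bounds were not given, so they are omitted), and the measure names are shorthand, not official CLSA variable names:

```python
# Plausible score ranges quoted in the talk; keys are shorthand
# labels, not official CLSA variable names.
PLAUSIBLE_RANGES = {
    "rey_recall": (1, 15),
    "animal_fluency": (0, 50),
    "mental_alternations": (0, 52),
}

def is_plausible(measure, score):
    """Return True if `score` lies within the plausible range for
    `measure`, so out-of-range entries can be flagged before
    normative percentiles are computed."""
    lo, hi = PLAUSIBLE_RANGES[measure]
    return lo <= score <= hi

print(is_plausible("animal_fluency", 23))  # -> True
print(is_plausible("rey_recall", 0))       # -> False
```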
Okay, great, thank you. If you have any further questions, folks, please feel free to type them in the chat box. We have just a few minutes left on the webinar, so I will try to get to these last few questions that have just come in.

Dr. Tuco, Lauren asked: will data be collected from the clinician tool to build a larger data set for creating normative standards? Again, that is something we have not determined yet; I do not know the technical difficulties involved, so I would say that is one of those questions still out there for us to wrestle with before we finish this project. Okay, sounds great, and hopefully we can have a future webinar with updates on this research to continue the discussion.

Another question, from Andrew Patterson: presumably age group was not included in the French model due to the small sample size, but surely you can split at the median or quartiles? Again, this is off the top of my head, and my statistician can give you the more accurate answer, but I believe there was no association with age for the French samples. Okay, thank you very much.

And then one final comment and question, from Natalie Phillips. She says: Holly, first, on behalf of Canadian researchers, I thank you and your team for this work. Her question is: I think the utility of making the tools available will depend on whether they are sensitive to normal versus clinical populations; do you have a timeline for when this sensitivity will be known? Well, if she is asking whether the measures are sensitive to the self-reported neurological conditions in the CLSA baseline data, we will know that in the next couple of months. But as I said, that might not be the best indicator, because the sample was selected to be pretty healthy; even if people are self-reporting a neurological condition, it may be very mild and
not of significant severity for us to pick up. The next, and most important, step will be going forward to see whether our measures are sensitive to decline as people develop conditions. So we will look at what we have with the baseline data, and we will report on that relatively soon.

Okay, thank you very much. I think that just about wraps up the questions we received through the chat function, so thank you again, Dr. Tuco, for this presentation; we appreciate you participating in our webinar series. For the attendees: Dr. Tuco did ask for input on the tools that are in development, and I know a few of you shared feedback privately via the chat function, which I would be happy to pass along to Dr. Tuco. If you have further feedback, I encourage you to send it either to the CLSA through the Gmail address on Eventbrite or, with your permission, Holly, to your email address at the University of Victoria, which I can share so folks can send their feedback. That would be great. And for some of these more technical statistical questions, if you need to speak with my statistician and get the straight skinny on these things, feel free to send me that information and I will speak with her and get back to you.

Wonderful, thank you very much. That concludes this webinar for today. We have an upcoming CLSA webinar on Thursday, February 22nd, where Dr. Dale Long will present on the global importance of frailty and pre-frailty in middle-aged adults. Registration will open next week on Eventbrite, so keep an eye on your inbox for an invitation. Thank you again for attending today's webinar, and we hope to talk to you again soon.