Good afternoon, everybody. I hope you can all hear me well. If not, just type into the chat that you're having problems hearing me, and we will correct the situation. My name is Mark Oremus, and I am an associate professor in the School of Public Health and Health Systems at the University of Waterloo. I'm also an associate scientific director for the Canadian Longitudinal Study on Aging, and I'm going to be hosting today's installment of the CLSA webinar series. It gives me great pleasure to introduce Dr. Mark Keezer. Mark completed his undergraduate medical and postgraduate adult neurology training at McGill, and after his medical training, he also completed an MSc in epidemiology under the supervision of Christina Wolfson, who is one of the CLSA principal investigators. To complete this degree, he was supported by a Fonds de recherche du Québec – Santé bursary. He is currently a clinical research fellow at the National Hospital for Neurology and Neurosurgery in the United Kingdom, working under the auspices of the International Sponsorship Scheme of the Royal College of Physicians in London. Mark's professional interests include several aspects of epilepsy research, including methods of case ascertainment, comorbid conditions and premature mortality, and the treatment gap in epilepsy surgery. This webinar, the Identification of Adults with Epilepsy in Population-Based Studies, is going to describe some of Mark's work in developing a new screening instrument to identify adults with epilepsy. I think this is the first presentation where our presenter is actually out of the country, and indeed not on the continent of North America. I believe Mark is presently in London to give this presentation, so about six hours later than us. So thank you very much, Mark, for accommodating us and the international time zones. The presentation will be approximately 40 to 45 minutes, and we'll then have 10 to 15 minutes of questions. So, Mark, I turn things over to you.
Thank you very much, Mark, for the introduction, and thank you, of course, for inviting me to speak. I hope everyone can hear me all right. I was going to say good evening, but it's still the afternoon back in Canada. So the title of my talk today is the Identification of Adults with Epilepsy in Population-Based Studies. This is work that I did during the course of my MSc in epidemiology with Christina Wolfson at McGill. This is, of course, a talk that involves a lot of population-based research, and I think a nice way of highlighting why population-based research is important is to talk about the clinician's fallacy. This is the assumption that one can truly understand a disease by only studying individuals who present to medical attention. As you can imagine, this results in an iceberg phenomenon: you see what is above the surface, but you may miss a lot of what lies underneath. In epidemiological terms, there are related risks of selection bias and, at the very least, limited generalizability. So population-based research is important. Of course, when you start engaging in this type of research, there are additional challenges. In the context of the clinic, it's generally relatively easy to determine who has or does not have a disease: it's usually easy to apply your gold standard to all of the participants in the study, since you're often talking about dozens or maybe hundreds of people. But in the context of large population-based studies, you could be talking about tens of thousands or hundreds of thousands of people, and applying your gold standard to all of these individuals would obviously be impractical. So you have to use some sort of surrogate for your gold standard, and that, of course, is a validated screening instrument.
In the context of epilepsy, this almost invariably takes the form of a questionnaire. So this is the outline of my talk today. The first part is going to be a somewhat longer background section than usual, because I know this is a mixed audience, and I wanted to make sure that I address at least some of the fundamental concepts that I'll be using throughout this talk. Then the meat of the talk will be the work that I did during my master's thesis. I'll talk about a systematic review that we conducted as part of that thesis, then about how we used what we learned from that review to design a new case ascertainment questionnaire and algorithm, one that we subsequently entitled the CLSA Epilepsy Algorithm. And finally, I'll talk about the work we did in validating this algorithm. So, background. This is, of course, in large part a talk on epilepsy, so I wanted to define a few fundamental terms. The full definition of epilepsy is quite long, but I think the heart of it revolves around the concept that it is an enduring predisposition to generate recurrent, unprovoked epileptic seizures. Essentially, what distinguishes someone with epilepsy from someone without epilepsy is not whether they have had an epileptic seizure; it's whether they have this underlying predisposition such that at some point they may have a seizure, in a sense, out of the blue. What is an epileptic seizure? What's important to understand is that it is a transient occurrence of clinical signs and symptoms related to abnormal, excessive, or synchronous neuronal activity of the brain. Of course, this is a talk about epilepsy, but also about screening questionnaires in epilepsy.
And when we're talking about screening questionnaires, it's important to understand how we can validate them, how we can understand how well they in fact perform. The basic measures of this test validity, of criterion validity, are sensitivity and specificity. A nice way of understanding them, and I'll see if I can get my pointer to work, is by using what I have here, a two-by-two contingency table. Sensitivity can be defined as the probability that someone with the disease has a positive test, while specificity is the probability that someone without the disease has a negative test. In terms of screening instruments for epilepsy, there are examples that have been used in the past, including in Canada. The National Population Health Survey and the Canadian Community Health Survey are two large, ongoing Canadian studies that have used different epilepsy screening questions. You see here the question for the NPHS and the question for the CCHS. What's important to understand, however, is that these questions have in fact never been validated, so unfortunately we do not know their sensitivity and specificity. What we do know is that a relatively similar question, which has been validated by the reference you see down here, Ruth Ottman at Columbia University in New York, has in fact been shown to have a sensitivity of only 76%. In other words, if you were to ask 100 people with epilepsy this question, 24 would say no. I think this is the reason why we should be cautious when using single questions to try to identify people with epilepsy. It's important to point out that this problem is not specific to epilepsy: single-item questionnaires are not necessarily very accurate in other diseases as well.
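To make these definitions concrete (an illustration of my own, not from the slides), sensitivity and specificity fall straight out of the four cells of such a two-by-two table:

```python
def sensitivity(true_pos, false_neg):
    """Probability that someone WITH the disease screens positive."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Probability that someone WITHOUT the disease screens negative."""
    return true_neg / (true_neg + false_pos)

# Hypothetical numbers: of 100 people with epilepsy asked the single
# question, 76 say yes -- mirroring the 76% sensitivity quoted above.
# The 950/50 split for people without epilepsy is likewise invented.
print(sensitivity(true_pos=76, false_neg=24))    # 0.76
print(specificity(true_neg=950, false_pos=50))   # 0.95
```

Note that the denominators differ: sensitivity is computed only over people who truly have the disease, specificity only over those who do not.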
If you ask people with Parkinson's disease this first question here, the sensitivity has been found to be only 89%. If you ask people with migraine this question here, the sensitivity has been found to be only 76%. So this is a problem that can be found with many disorders. I've been trying to illustrate some of the challenges you find when trying to study epilepsy in the general population, and I've just compared similar challenges in other diseases. One thing that is a little more particular to epilepsy has to do with the concept of absorbing states. Absorbing state is actually a general statistical term: an absorbing state is one that, once it occurs, cannot be undone. In the context of disease, an absorbing state is one that cannot be cured, as opposed to a non-absorbing state, which is one that can be cured. So, for example, idiopathic Parkinson's disease is, I think quite obviously, an absorbing state: once someone has Parkinson's disease, they'll always have it. For epilepsy, however, that is not necessarily the case. You may have epilepsy when you are 10 years old, but by the time you're 60, you may no longer have epilepsy. This actually adds an extra level of complexity to its study. To illustrate this, I just wanted to bring you back to the questions I showed you earlier. Here are the two questions, the one used by the NPHS and the one by the CCHS. You see that in their questions, they ask whether you have epilepsy; note the use of the present tense. In fact, what they're measuring is active epilepsy. Whereas the Ottman question, the one I showed you with the sensitivity of only 76%, uses the verb had: she was asking not only whether you have epilepsy now, but whether you have ever had it in the past.
This is measuring something quite different: the lifetime history of epilepsy. To describe the importance of this distinction, it helps to know that the prevalence of a lifetime history of epilepsy is generally about twice that of active epilepsy. So understanding the distinction between the two is important. This is a talk about studying epilepsy in the general population, and of course the motivation for the work I'm going to present is the Canadian Longitudinal Study on Aging. Given the context of the talk, I presume that most of you know quite a bit about the CLSA, but just to summarize briefly for those of you who are not as familiar with it: this is a very large, ambitious project that first started recruiting participants in 2012, whose goal is to recruit 50,000 people, aged between 45 and 85 years old at the time of recruitment. This goal has almost been achieved, or may have been achieved by now. Participants will then be followed for the next 20 years, during which they will undergo regular assessments: telephone interviews, in-home interviews, or more comprehensive assessments at a data collection site. Like most large population studies, great effort was put into designing the questionnaires for the study, but more at a, let's say, macro level; as you can imagine, when you have hundreds and hundreds of questions, you cannot possibly go through and validate every single item. There have been more recent efforts, and this is a paper that was published by our moderator, Mark Oremus, just a few years ago, where they validated algorithms for seven different diseases. There was one neurological condition included: an algorithm for Parkinson's disease.
Presently in the CLSA, or I should say up until this summer, there is just a single question for epilepsy. The question is: has a doctor ever told you that you have epilepsy? If we compare this to that Ottman question, we would wonder, although we can't say for sure, whether the sensitivity of this question is, again, only 70% or so. Within the CLSA, there is a neurological conditions initiative, a study for which Christina Wolfson is the principal investigator. The idea behind this was to augment the investigation of neurological diseases within the CLSA, and part of that, of course, is improving the way in which neurological conditions are identified. So that brings me, I suppose, to the meat of this talk. I want to begin with a systematic review that we carried out looking at previously validated epilepsy screening questionnaires. I'm going to go over this relatively quickly; if you are interested in learning more about it, it was recently published in Epilepsia, towards the end of last year. To describe it briefly, this was a systematic review of diagnostic and screening studies. Our eligibility criteria were relatively straightforward: we were looking for studies that measured the sensitivity and specificity of non-physician-administered screening questionnaires in adults. We were open to questionnaires that had been validated in either a population-based or a hospital-based cohort, and to questionnaires administered either over the telephone or in person. Given the resources we had at the time, we kept our search strategy relatively limited, to the two major databases, MEDLINE and Embase. We did not have any language restrictions.
We assessed quality using the QUADAS-2 tool, as recommended by the Cochrane group. We did not perform any meta-analyses; there was simply too much heterogeneity between these studies, as I'll show you in a moment. For those of you who are wondering, we did not perform any formal statistical tests for heterogeneity or publication bias either. My feeling, and I think what a lot of people have written, is that these tests have really been designed and validated for use in reviews of trials, not for observational or diagnostic studies, and that they have to be used with great caution outside of trials. There are other, more descriptive ways of looking for heterogeneity in diagnostic studies. And when it comes to publication bias in diagnostic studies, I think the safest thing is to simply assume that there is publication bias. In fact, whenever you're looking at observational studies, assume there is publication bias, irrespective of what a funnel plot or an Egger's test may tell you. So here are the results of our search for the systematic review. We initially identified 917 articles, which went through two levels of screening, and we finally ended up with nine validation studies: six that looked at the lifetime history of epilepsy, and three that looked at active epilepsy. Here is our table of study characteristics, and I hope yours isn't blocked out by this toolbar like mine is, but I'll assume it isn't. I won't go through all the details of this table; it's a very busy table, I understand. One thing I did want to point out is highlighted by this asterisk: the study by Ottman. This is a study that stood out to us for several reasons. First of all, all of this work was being done with the CLSA in mind, and the CLSA is administered in two languages, English and French.
And so the Ottman study, first of all, was one of the few validation studies that was actually done in English. It's a bit surprising considering how much of the literature is in English now, but it was one of the few. It was also one of the few that had been validated in a population-based cohort. And of these, it was the one with the highest sensitivity, 95.8%, which is quite good, relative to the others at least. So it certainly piqued our interest. Another thing that stood out about the Ottman tool was the quality of its validation study. I was about to say high, but I should specify that anything in red in this table is not good: it means a high or unclear risk of bias, or high concerns about applicability. In fact, the Ottman study, again marked by an asterisk, had one of the best quality assessments of all of these validation studies. So again, another thing that caught our attention. In conclusion, we had identified nine validation studies. In the paper, we discuss various sources of heterogeneity between the different studies, involving not only the populations in which these tools were validated, but also the questionnaires themselves: there were very few cases where the same questionnaire was used in different studies, and it was generally a new questionnaire every single time. Even the criterion condition differed: active epilepsy versus lifetime history of epilepsy. There were concerns about study quality in the majority of studies, but there did seem to be a possible advantage with the Ottman study. This brings me to how we then started to design our own questionnaire and algorithm, what we've now called the CLSA Epilepsy Algorithm. In fact, what we did is we tried to learn from the past. So here you have the Ottman questionnaire.
And I'll talk in a minute about what I mean by algorithm. These are the questions that Ottman used and validated in her study. You see here that question number two is the one I described before, the one asking: have you ever had a seizure disorder or epilepsy? The one shown to have a sensitivity of 76%. There were other questions, including ones asking about a history of convulsions, and then what we've called the symptom-based questions. These don't ask whether you have epilepsy; instead, they ask whether you have had symptoms that could be suggestive of epilepsy, things like, do you have unusual spells, or did you have spells of spacing out when you were younger, and similar things. So that's the questionnaire that they used. Now, I've been using the term algorithm quite a bit already, and I suppose it's high time I define what I mean. The questionnaire is the set of questions that you ask the individual. The algorithm is what you use to decide what a positive test actually is. If you only have one question in your questionnaire, then it's very easy: if they answer yes to your one question, that's a positive test. With multiple questions, you have to decide what your threshold is for saying that, overall, the screen is positive. Is it enough if they answer yes to two of the questions? Do they have to answer yes to three? Or do they have to answer yes to this one here and then to one of these? Whatever combination you come up with, that's the algorithm; that's how you decide what a positive test is. Ruth Ottman's group used a number of different definitions of a positive test.
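To make the questionnaire-versus-algorithm distinction concrete (a sketch of my own, with hypothetical question keys, not the actual wording): the same set of answers can be run through different rules, and each rule is a different algorithm over the same questionnaire.

```python
# Hypothetical responses to a three-question screen: True = "yes".
responses = {
    "ever_had_epilepsy": False,
    "history_of_convulsions": False,
    "unusual_spells": True,   # a symptom-based question
}

def any_yes(r):
    """One algorithm: positive screen if ANY question is answered yes."""
    return any(r.values())

def at_least_two_yes(r):
    """A stricter algorithm: positive only with two or more yes answers."""
    return sum(r.values()) >= 2

print(any_yes(responses))           # True  -> positive screen
print(at_least_two_yes(responses))  # False -> negative screen
```

Changing the rule while keeping the questions fixed changes who screens positive, which is exactly why the choice of algorithm, not just the questionnaire, has to be validated.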
The algorithm that seems to have performed the best was this one, where they decided that an affirmative response to any of these questions was sufficient to say that the screen was positive, and this is the sensitivity that they found using it. I haven't been talking about specificity in the context of this study because, in fact, they didn't report specificity. They talked about false positive rates and so forth, but it's a bit complicated, and I don't think I have time to go into that now. So this is what they had done. For the CLSA Epilepsy Algorithm, we had five major goals. One was to distinguish between active epilepsy and a lifetime history of epilepsy; the Ottman questionnaire only identifies those with a lifetime history, not those with active epilepsy. We also wanted to add a certain level of complexity to the positive screen: we wanted to be able to distinguish between levels of certainty, between probable and suspect epilepsy. In the end, what we wanted to be able to say is that if you said yes to the "I have epilepsy" question, that meant you had probable epilepsy; but if you said no to that question and yes to something like "I have unusual spells", we wanted to distinguish that from probable epilepsy and instead call it suspect epilepsy, to reflect the fact that we were a little less certain about whether you had epilepsy or not. We also wanted to incorporate the use of anti-epileptic drugs into the algorithm, to see what effect that would have on its performance. Given the context of the CLSA, we of course wanted to develop a French-language instrument. And finally, we wanted to have a priori algorithms.
We wanted to have a priori definitions of screen positives, to try to minimize the biases that can happen with post-hoc analyses. This is still, of course, an exploratory study to a large degree, but we wanted to do something to control for that. So here is the English-language questionnaire that we used. The first question and all of these symptom-based questions are taken directly from Ottman, from the questionnaire I've already shown you. This question is also one described by Ottman; it wasn't part of the primary questionnaire, but of a sort of secondary questionnaire. And these down here are the ones we added to distinguish between active and inactive epilepsy. So that's the questionnaire. And these are the algorithms, our ways of defining a positive screen, that we wanted to test. I'll go through them relatively quickly, but I hope I'll be able to explain them relatively clearly. There are two versions, the CLSA-EA1 and the CLSA-EA2. The CLSA-EA1 begins with the self-reported diagnosis question: have you ever had epilepsy or a seizure disorder? If you respond in the affirmative, then the next step in the algorithm is your response to the anti-epileptic drug question. If you also respond in the affirmative to that question, then according to the algorithm you are classified as probable epilepsy: you said yes to epilepsy, yes to drugs for epilepsy, so you probably have epilepsy. If you say yes to epilepsy but no to the drug question, that causes us some uncertainty, and so instead of classifying you as probable epilepsy, we classify you as suspect epilepsy. This is also where we incorporated the symptom-based questions: you only come to this part of the algorithm if you say no to the epilepsy question, but yes to, for example, suffering from unusual spells.
In that case, we're unwilling to say that you have probable epilepsy, but we're willing to say you have suspect epilepsy. And of course, if you say no to everything, you have no epilepsy. That explains the top level of the algorithm, above this line. The level below is used to distinguish between those with active and inactive disease: this is where we ask whether you've had a seizure within the last five years or whether you're currently on anti-epileptic drugs. The CLSA-EA2 is an alternative to the first version, with a relatively modest change, but one that we thought could have a significant impact on the results. The difference is really here: in the CLSA-EA1, if you say no to the epilepsy question, the most you'll ever be considered is suspect epilepsy. In the CLSA-EA2, even if you say no to the epilepsy question, if you say yes to one of the symptom-based questions and also yes to the drug question, then you will be classified as probable epilepsy. It's a different way of defining a test positive. Once we had developed all this, we needed to translate it. This was done using standard cross-cultural translation techniques, where one translator translated the English questions into French, a second translator translated them from French back to English, and then all three versions were compared by a bilingual investigator, which was myself. And here are those French-language questions. So then the final part of this is the validation of the CLSA Epilepsy Algorithm. This too was recently published in Epilepsia, towards the end of last year. Our goal, of course, was to validate this using CLSA participants, because so much has gone into ensuring that they are a random, representative sample of the general population.
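The top level of the two algorithms just described can be sketched as follows (my own illustration, not the study's code; function and field names are hypothetical, and the lower tier that separates active from inactive epilepsy, based on a seizure in the last five years or current anti-epileptic drug use, is omitted):

```python
def clsa_ea1(says_epilepsy, on_aeds, any_symptom):
    """CLSA-EA1 top level: a 'no' to the epilepsy question caps the
    classification at suspect, no matter the other answers."""
    if says_epilepsy:
        return "probable" if on_aeds else "suspect"
    if any_symptom:
        return "suspect"
    return "no epilepsy"

def clsa_ea2(says_epilepsy, on_aeds, any_symptom):
    """CLSA-EA2 variant: a symptom 'yes' PLUS a drug 'yes' is enough
    for probable, even without self-reported epilepsy."""
    if says_epilepsy or any_symptom:
        return "probable" if on_aeds else "suspect"
    return "no epilepsy"

# The modest change between the two versions in action: no to the
# epilepsy question, yes to a symptom question, yes to the drug question.
print(clsa_ea1(says_epilepsy=False, on_aeds=True, any_symptom=True))  # suspect
print(clsa_ea2(says_epilepsy=False, on_aeds=True, any_symptom=True))  # probable
```

The two functions differ only in that one branch, which is why the change was expected to matter mainly for what counts as a "probable only" screen positive.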
So we began by recruiting participants from the regular CLSA cohort, as well as participants from the CLSA pilot. We recruited consecutive CLSA participants, in the order in which they had been initially recruited into the CLSA, although we did use some stratified sampling to try to create some balance between English- and French-speaking participants. We soon understood the limitations of our resources and the fact that the prevalence of epilepsy in the general population is relatively low: if we limited ourselves to CLSA participants, it would have been very difficult to get enough participants with epilepsy. So we had to use an extra source of participants, the MNI participants, from the Montreal Neurological Institute. We recruited individuals from what we've termed an epilepsy-enriched general neurology clinic at the MNI. This is basically a clinic where, although a lot of the participants do have epilepsy, not all of them do. That had certain advantages for us, as I'll explain in a second. To go through how we carried out the study: the CLSA participants were telephoned, recruited, and invited to come to the Montreal data collection site. When they arrived, they were consented, and then the questions from the questionnaire were administered by a research assistant. It was important that this research assistant was entirely unaware of whether the person had epilepsy or not; this was done to minimize interviewer bias, of course. Following that, they were referred to me. I acted as the reference standard, the gold standard, so to speak.
Carrying out a standard neurological history and physical examination as necessary, I would determine whether the person did in fact have epilepsy or not. It was important that I was unaware of the results of the questionnaire at this point, in order to avoid what's been referred to as verification bias, to make sure that I was not influenced by the results of the questionnaire. We did carry out a small reliability study looking at the quality of my diagnoses, comparing them to those of another neurologist, Nathalie Jetté, and found that there was perfect agreement between our diagnoses. For the participants seen at the Neuro, it was a similar process, except that here they were seen by the neurologist first; the neurologist was, of course, unaware of the results of the questionnaire because it hadn't been administered yet. After being consented, they were referred to a research assistant who administered the questionnaire. Again, what was important here was that we did our best to ensure that the interviewer was unaware of the epilepsy status of the individual. This is why it was good to have a general neurology clinic: there was still some doubt in the interviewer's mind as to whether the person had epilepsy or not. Interviewer bias, of course, is the concern that when you're administering a questionnaire, knowing that the person has a disease, in our case epilepsy, may influence the way you ask the questions. Here are the participant characteristics. We ended up recruiting 242 individuals, 34 of whom had epilepsy and 208 of whom did not. You see also that the vast majority, 33 of the individuals with epilepsy, came from the MNI. Here is our flow diagram showing the breakdown of participants. You see here the number of participants who were approached, whom we attempted to recruit, and a number who refused.
We had a participation rate of just above 85%, our 242 individuals. I won't go through the rest of the flow diagram; instead, I'll show you our results. This is a summary table showing the sensitivity and specificity estimates as well as the predictive values, listed up here, for the questionnaire and algorithm, for a lifetime history of epilepsy versus using it to identify people with active epilepsy. What we found was that the CLSA-EA2, using probable only as our definition of a screen positive, had the highest sensitivity and specificity for both lifetime history and active epilepsy. To remind you of what that actually means: the CLSA-EA2 appeared to perform better than the CLSA-EA1 when we considered probable epilepsy only as our definition of a screen positive. So basically, if you're in this part of the algorithm, you're considered positive as per the questionnaire, and if you're anywhere in here, it was a negative screen. We found that, used this way, it performed the best. A few final observations. We did compare people using the English-language questionnaire versus the French-language questionnaire, and did not find any differences in sensitivity and specificity between these two groups. It is important to highlight that this study is at risk of spectrum effects, because we used participants who were not from the general population, the people from the MNI. Spectrum effects, essentially, is the idea that when you're testing how well a test performs, if all you use are very sick individuals and very healthy individuals, then you're at risk of inflated sensitivity and specificity estimates, because it's much easier for a test to figure out that someone who is very sick has a disease, or that someone who is very healthy doesn't.
It's often much more difficult for a test to figure out all the gray area in between. That said, if you just look at that single self-report screening question for epilepsy, its sensitivity according to our study was only 74%. This is very similar to the 76% reported by Ottman, which would suggest that in fact we weren't suffering too much from spectrum effects, and that any improvement we demonstrated above this 74% was due to the questionnaire and algorithm that we used, and less due to bias from spectrum effects. Our conclusion was that the CLSA Epilepsy Algorithm appears to have a higher sensitivity and specificity than other previously validated population-based instruments. It's one of the few instruments that can identify both active epilepsy and a lifetime history of epilepsy, and we now have a validated French-language instrument. As for future directions, I'm happy to say, I found out, I think it was sometime last year, Mark can correct me, that it's been approved for inclusion in the CLSA. So that single question I showed you earlier, which was in the CLSA previously, is now going to be replaced with this algorithm beginning this summer, in the next wave. We now also have plans to develop algorithms for migraine, and although there has already been an algorithm validated for parkinsonism, we're planning a systematic review to look at what other algorithms have been validated in the past.
Of course, on to acknowledgments: Tina Wolfson, who, as we were recently reminiscing, I've worked with since I was a medical student over 10 years ago, and who was, of course, my supervisor throughout this work and during the course of my master's; Nali Zerte, who was on my thesis supervisory committee and has also worked with us a lot on these projects; Nathanael Veira at the Neuro; and a number of research assistants and, in fact, one neurologist as well. These are the funding agencies that have been involved. Thank you. Yes, thank you. Great. Thank you very much, Mark, for this very interesting presentation, especially for me, since I've done some algorithm validation work as well. If there are any questions for Mark, please type them using the chat feature and I will read them out to Mark and to the audience. Moving on to questions, I have one, Mark: as you were evaluating the results and the performance of the algorithm, did it ever come up as a concern that, although the algorithm is intended for use in population-based studies, most of your cases of epilepsy actually came from patients at the Neuro, and only one of your epilepsy cases came from people enrolled in a population-based study? Did that raise any concerns for you or the research team? That was actually one of my bigger concerns during this study. I think this gets to the idea of spectrum effects: if you're using cases from a specialized medical setting especially, then it's possible that you're going to end up with inflated sensitivity results. Ideally, we would have used only individuals from the general population and relied only on participants from the CLSA. Unfortunately, with the resources we had, this wasn't possible. Yes. Okay, so then would you say that in such a study the sensitivity or specificity would be a little bit lower? It's possible.
I think that's why it was interesting to find that when we compared the sensitivity we found for the single epilepsy question with that of Ottman, they were very similar. To us, this suggests that, in fact, problems of spectrum effects may not have been too great. It's impossible to say for sure, but this was a clue that maybe things weren't too bad. I see, great. I'm seeing that a couple of people are typing in chat messages, so I imagine they might be typing in questions for you. While we're waiting for their questions to appear in chat, just another quick question about the identification of chronic disease in population-based studies. This is more of a general question; I don't know if this was discussed by yourself and the research team. If you wanted to identify, for example, the incidence and prevalence of epilepsy or some other chronic disease, what might be some of the advantages of a study like the CLSA in that respect, as a study of chronic disease? Well, I guess the fact that it's population-based is the advantage, along with its longitudinal nature: you've identified a fixed cohort at the very beginning, and you can follow them over time and watch their disease develop and evolve. Yes. It's the advantage of being population-based. When you rely on a clinic, for instance, to study chronic disease, there is a huge potential for all kinds of selection bias. Yes, I think that's the major advantage. Right. Yeah, I would agree with that. A question: what about medications that are used both for epilepsy and for other disorders? That could have affected the performance of the questionnaire and algorithm, certainly. We could have added more to the questions to try and ensure that that wasn't the case. The specific question, if I remember correctly (I don't have it in front of me), was: have you ever been on medications for seizures?
So, the design of the question itself should have limited false positives due to people taking, for instance, pregabalin for peripheral neuropathy or neuropathic pain rather than epilepsy. That said, when you're designing these questionnaires, you can never make them perfect. You do your best, but you have to make certain concessions to keep them usable, and in the end you have to take what you have and see how it works. In the case of anti-epileptic drugs used for a non-epilepsy indication, with our algorithm I think the chances of creating false positives would be relatively low even if the question weren't as specific as it is, because you would have to answer incorrectly not only about the drug question, but about the epilepsy question as well. So, yes, there are a few different strategies to try and control for that. Great. Another question says: thanks, Mark, for a great talk. Curious what your thoughts are about using the instrument as self-administered versus interviewer-administered; some population-based studies may not have the luxury of having an interviewer administer it. Of course, yeah, when you're validating something, it's only been validated for your specific circumstance, right? So with any transportability of your findings to other situations, other scenarios, you can't be certain how it's going to work. Just going from an interviewer-administered questionnaire to a self-administered questionnaire can have an impact. Ideally, what you would do is carry out another validation study to see what impact it would have. I don't think any single validation study says everything about a test; it makes sense, and is sometimes very important, to repeat the validation study in different populations.
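As a back-of-envelope illustration of the point about the drug question: if we assume, hypothetically, that a non-case is only screened positive when they answer "yes" to both the epilepsy question and the medication question, and that those errors are roughly independent, then requiring both answers multiplies the per-question false-positive probabilities. The error rates below are invented for illustration, not taken from the study:

```python
# Hypothetical per-question error rates (not from the study):
p_fp_epilepsy_q = 0.05    # chance a non-case wrongly answers "yes" to the epilepsy question
p_fp_medication_q = 0.10  # chance a non-case wrongly answers "yes" to the medication question

# If a positive screen requires a "yes" on BOTH questions, and the errors
# are roughly independent, the combined false-positive probability is the
# product: 0.05 * 0.10 = 0.005, i.e. 0.5% instead of 5% or 10%.
p_fp_combined = p_fp_epilepsy_q * p_fp_medication_q
print(f"combined false-positive probability: {p_fp_combined:.3f}")
```

This is only a sketch of the design intuition; in practice the two answers are unlikely to be fully independent, so the real reduction in false positives would be smaller.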
It could also be administered in different ways, to see how it performs, but it could have an impact, it's true. I think that some algorithm questionnaires can probably be self-administered versus interviewer-administered without any problem, depending on the type of question, but I do agree that we would want to do a validation to investigate whether the same type of results are generated with one method versus another. Another question: once the tool is included in the CLSA, how will you use these data? So, first of all, of course, we'll use it descriptively, to simply understand the prevalence of both lifetime history of epilepsy and active epilepsy. We'll also use it to enumerate a cohort: those people in the CLSA who are affected by this disease, so that we can then carry out various other studies. There's an enormous amount of data being collected by the CLSA, both cross-sectionally and longitudinally, and by using this, we hope to have a better way of figuring out who has epilepsy, so that we can then look at what factors are associated with their epilepsy, what their prognosis is, and what vulnerabilities are associated with their epilepsy, and so forth. It allows us to do that as well. Exactly. And I think that in a study as big as the CLSA, it's just not possible to send all of our participants to see specialist physicians to get clinical diagnoses of well over 10 different types of chronic disease. We're measuring more in the CLSA besides epilepsy, and we can't send all of our participants to 10 or 15 specialist physicians, so we need to use algorithms like this to obtain information on the presence or absence of disease. And I think you're right, Mark: once we can identify people in the CLSA as having or not having epilepsy using this algorithm, we can look at the prevalence of epilepsy, look at the incidence of epilepsy over time, and study risk factors for epilepsy.
Okay. So, there are another couple of questions here. Would it be possible in the future to use data linkage with medical records in the CLSA to further validate the algorithm? Hmm, that's interesting. The thing is, though, you would again be using one surrogate to test another surrogate. I'm not sure you could do that, because neither of them would be the reference standard per se. I have in mind that the data you'd be linking to would be administrative data, diagnostic ICD codes for instance, which in and of themselves require validation and have been validated, but aren't perfect, just like this algorithm. It would certainly be interesting to compare the two. In fact, it would be interesting to compare the three: the algorithm, ICD codes, and clinical assessment. Yeah, it's very interesting. Great, thanks. Would you anticipate new cases of epilepsy over time in the CLSA, given that participants are already at least 45 years of age upon enrollment? Yes, you would. There's actually a bimodal distribution to the incidence of epilepsy: the smaller peak is in younger age groups, and the larger peak is in older age groups, the most common causes being cerebrovascular disease and neurodegenerative disease, Alzheimer's disease for instance. So you would certainly expect new cases in people aged over 45 years old. Great. Another question. Thank you, Mark, for this presentation. In the population that you screened, did you notice that there was a prompting effect of the questionnaire, so that people self-disclosed their unusual past episodes before you asked them? And this question is not specific to your presentation, but is related to the CLSA and epilepsy: there seems to be a wide range of prevalence rates out there with respect to the prevalence of epilepsy in seniors. Do you have a preferred estimate for epilepsy prevalence in people 65 and older? So, this is a two-part question, yes.
So, the first part is about the prompting effect. This is very interesting, and it's more in reference, I think, to the questionnaire itself and when it was administered. I didn't administer the questionnaires; they were administered by a research assistant, and it was actually important that I wasn't aware of how each individual responded to the questionnaire. When it came to the questionnaire, the idea was that the questions were read out in a somewhat controlled manner, not overly dry, so that it was reproducible and the way these questions were asked was standardized for each individual. So if they had started self-disclosing additional facts and so forth, I don't think it would have had an impact on the questionnaire. For me, as the reference standard, yes, sometimes people would self-disclose all sorts of things, but I wasn't too worried about that, because our gold standard was the clinical diagnosis. I did my best to carry out what I consider a normal clinical interview, which would of course include spontaneous things the participant might say as well as what I had asked them directly. The second question: there seems to be a wide range of prevalence rates, so do I have a preferred estimate for epilepsy prevalence in people 65 and older? That's interesting. I don't have a preferred number. Of course, you have to consider it separately for lifetime history and active epilepsy. In fact, I'd have to look back and refer to a primary source to know whether it would be any different from what I usually quote, which is 0.5% for active epilepsy; I can't give a better answer than that. Great. One person is saying thank you for the great and informative talk, and there's another thank you, and one last question, because we're already just a little bit over. So this will be our last question.
Have you thought of adding the question: is there anyone in your family who is known, or was known, to have active epilepsy, or to have had epilepsy at some point in their life? A family history question. No, we hadn't thought about adding that; it would have been something to consider. When we were deciding on what we were going to use as a questionnaire, we felt it was best to use an already existing instrument and build upon it, because in a way that reduced the number of unknown variables, right? Adding a question like that would have meant adding something to the Ottman questionnaire without an expressly stated purpose. We did add other questions, like the medication one, well, not so much the medication one, but more the one for active epilepsy, and that was for a very specific purpose, because it was something clearly missing from the original. But we would have been reluctant to add something like a family history question to the Ottman questionnaire. Great. Thank you very much, Mark, for this excellent presentation. I enjoyed it, and I'm sure everybody else did. Very informative, and, personally speaking, it's always interesting to see other people doing work similar to some of the work I've done. So, again, thank you so much for agreeing to present today. We really appreciate it. My pleasure; thank you for having me. Great. And we're getting a few more thank-yous: interesting webinar, thanks, thank you, thanks, Mark. So, certainly the audience seemed to appreciate it. Just before we sign off, a bit of a plug for our next CLSA webinar. It's going to be May 14th, from 2 to 3 p.m. Eastern Time. Verena Menec from the University of Manitoba will be presenting a webinar on age-supportive environments and healthy aging. The presentation will highlight some of Verena's program of research into age-friendly communities. So, that will be interesting.
We look forward to that, and we thank Mark again. Thank you. And thanks, everyone, for joining us today; we hope to see you in a month. Enjoy the rest of your day and the nice weather. Bye now.