Today I'm talking about a new strategy that we have developed at NAL to measure the amount of effort, or cognitive resources, that it requires to communicate in a challenging scenario, and some evidence supporting the use of this method in a clinical environment. It's very likely that you know somebody, or, if you're a clinician, that you have had a client, who reports struggling to follow conversations when there is noise around. In fact, NAL research has shown that in Australia around 37% of the population has difficulties understanding speech in noise, many of them without actually having hearing loss. This is a serious concern not only because of the prevalence of these difficulties but above all because of the impact that they have on the quality of life of the people who suffer them. To understand this more deeply, we sent online questionnaires all around the world and conducted personal interviews with the people who have these difficulties and also with the clinicians who treat them. For example, one person reported: "I have to try harder to hear. I can't always hear what they are speaking to me about. It takes a lot of concentration." Another one said: "Other people must be able to filter that background noise and pull it down to a lower level so that they can focus on conversation, so I must have a problem because I can't do that. It does take some of the pleasure out of being around people." On the side of clinicians, they mentioned that "there isn't really a test that we have available to show whether someone has an abnormally high difficulty with noise compared to other people." So the reality is that there exists a large group of people with serious hearing difficulties who cannot be helped because their problems are not adequately diagnosed. At NAL we have devised a new methodology that could help clinicians characterize the hearing difficulties of this population so they are able to provide individually oriented solutions.
Let's start by understanding the model behind listening effort. Listening effort can be understood through the Ease of Language Understanding, or ELU, model. It's a complex model, but in a nutshell it says that if there is a match between what you hear and the preceding context, then understanding the message is straightforward. However, if there is a mismatch between what you hear and your language neural structures and the background context, then extra cognitive resources need to be dedicated to filling the gaps and trying to extract the meaning from what you hear. People with hearing difficulties receive an acoustic signal that is degraded, which leads to an extra allocation of cognitive resources, and as a result people who suffer from these difficulties require extra effort to communicate with their peers. What have we done to measure this effort? Listening effort can be measured using behavioral tests based on a dual task: two tasks that are conducted simultaneously. Here the primary task is usually a speech-in-noise test, in which the participants are asked to understand a sentence presented in background noise and repeat it back, and the secondary task is a task of a different nature. It can be simple, something like pressing a button when you see a red light, or a bit more complex, like driving a car simulator. The idea behind the dual task is that the more effortful the primary task is, the fewer cognitive resources will be available for the secondary task. Therefore, by measuring performance in the secondary task, perhaps using reaction time, we can infer how effortful it is to understand a message in the primary task. We can illustrate this with an example. This picture illustrates that humans have a fixed cognitive capacity, shown with the pie chart, and let's imagine that we are trying to understand speech in a situation with very low noise.
So in this case, understanding the speech requires only a few cognitive resources, as illustrated here, leaving a large amount of cognitive resources for the secondary task. Thanks to this spare capacity, the behavioral task can be done very fast, which leads to short reaction times. On the other hand, if the speech is presented in a high level of noise, then we need to dedicate a lot of cognitive resources to understanding it, because, as we saw in the Ease of Language Understanding model, extra cognitive resources must be recruited, leaving only a few spare for the secondary task. As a result, that leads to long reaction times: people can do the task, but it takes them longer. This is the basis of a dual task. In addition to this, we also asked the participants to rate their perceived effort on a scale from one to seven, from no effort to extreme effort. Our dual task consisted of a primary task, as I said, in which the participants had to repeat back a sentence. The sentences were constructed from the Australian version of the matrix test and are composed of five words, always starting with a name. The secondary task was an auditory-visual task. In this task, two vertical rectangles appeared on a screen situated in front of the participant, and at the onset of the sentence a large circle appeared in one of the rectangles, something like this. The participant was instructed to press the arrow pointing towards the circle if the name was a male name, and the arrow pointing away from the circle if it was a female name. In this example, because Mark is a male name, we would press the arrow pointing towards the circle. In this test we can measure intelligibility, by counting how many words the participant correctly identified, and also the reaction time from the onset of the sentence to the button press.
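To make the trial logic concrete, here is a minimal sketch of how a single dual-task trial could be scored. The word lists, the name lookup table and the function name are illustrative assumptions, not the actual implementation or the real matrix-test materials.

```python
# Hypothetical scoring of one dual-task trial: word-level intelligibility for
# the primary task and the towards/away arrow rule for the secondary task.
MALE_NAMES = {"Mark", "Peter"}  # assumed lookup table, not the real test set

def score_trial(target_words, response_words, name, circle_side, key_pressed):
    # Primary task: proportion of the five matrix words repeated correctly.
    correct = sum(t.lower() == r.lower()
                  for t, r in zip(target_words, response_words))
    intelligibility = correct / len(target_words)

    # Secondary task: press towards the circle for a male name, away otherwise.
    pressed_towards = key_pressed == circle_side
    secondary_correct = pressed_towards if name in MALE_NAMES else not pressed_towards
    return intelligibility, secondary_correct

# Example trial: "Mark" is a male name, so pressing towards the circle is correct.
score = score_trial(["Mark", "wants", "two", "red", "chairs"],
                    ["Mark", "wants", "two", "big", "chairs"],
                    name="Mark", circle_side="left", key_pressed="left")
print(score)  # (0.8, True): four of five words correct, arrow response correct
```

In a real session, the reaction time from sentence onset to the button press would be logged alongside these two scores.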
So this test took place in the anechoic chamber of the Australian Hearing Hub, here in Sydney, Australia. The target was presented from the front speaker, while realistic cafeteria noise was presented from an ambisonic array of 41 speakers. We tested different signal-to-noise ratios, which were adjusted to obtain 50%, 80% and 95% intelligibility, to evaluate scenarios that require different levels of effort. We call these scenarios SRT-50, SRT-80 and SRT-95; SRT-50 is a very challenging scenario, while SRT-95 is an easier one. In this test the noise was fixed and we varied the level of the stimulus to reach the different SNRs. We tested five normal hearing individuals, all of them in their middle age and proficient in English. In this figure, each color represents a different subject, and we see that we tested the different SRTs, from around 50% intelligibility to around 95% intelligibility. We also tested some higher SNRs above SRT-95, as well as a quiet condition in which no background noise was used. As expected, intelligibility, or the percentage of correct words, increases as the signal-to-noise ratio increases. Overall we see no surprises here, because the test was designed to obtain this performance. Self-perceived effort was evaluated on a seven-point scale, from no effort to extreme effort; again each color belongs to a different subject, and the thin colored lines are fitted to the performance of each individual, while the thick black line refers to the group behavior. This figure shows that self-reported effort was associated with the difficulty of the task. Remember that the noise was fixed in all conditions except quiet, where we didn't use noise, so the clearer the speech, the lower they rated their effort. This result could validate self-reported assessment as a reliable measure of effort, and this has some implications: for example, it indicates that we could use an on-site self-report assessment to evaluate the effort of a client.
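As a sketch of how the per-condition SNRs could be derived, a logistic psychometric function can be inverted to find the SNR that yields a target intelligibility. The parameter values below (50% point and slope) are made-up examples for illustration, not fitted values from the study.

```python
import math

def snr_for_intelligibility(p, srt50, slope):
    """Invert p = 1 / (1 + exp(-(snr - srt50) / slope)) to find the SNR (dB)
    that yields intelligibility p, given the fitted 50% point and slope."""
    return srt50 + slope * math.log(p / (1.0 - p))

# Assumed example fit: 50% intelligibility at -6 dB SNR, slope factor 1.5 dB.
srt50, slope = -6.0, 1.5
for target in (0.50, 0.80, 0.95):
    snr = snr_for_intelligibility(target, srt50, slope)
    print(f"{int(target * 100)}% intelligibility -> {snr:+.1f} dB SNR")
```

In practice the function would first be fitted to each individual's measured intelligibility-versus-SNR data, which is what allows the SRT-50, SRT-80 and SRT-95 conditions to be matched per participant.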
We also analyzed the participants' reaction time in the secondary task as a proxy for their effort, so longer reaction times would indicate more effort in understanding speech. We were expecting longer reaction times around the more effortful scenarios and shorter reaction times in the more positive range of SNRs. Our results are consistent with the predictions: indeed, reaction time decreased as the SNR improved. This is shown in the group behavior, the thick black line. We also see a certain amount of variability between individuals: some of them were performing around 700 milliseconds while others were above one second. Overall, these results demonstrate that the reaction time of the proposed dual task is a sensitive measure of effort across a broad range of SNRs, from very challenging scenarios like SRT-50 to more realistic scenarios like SRT-95, which could reproduce a noisy cafeteria. Probably the most interesting bit comes when we compare the results obtained from the normal hearing group with another group of hearing impaired participants. We replicated the study with six hearing impaired subjects. This figure shows the audiograms of the six participants: they had bilateral downward-sloping hearing loss, and they were all experienced hearing aid users. An important difference between the two groups, which is also a limitation of this comparison, is the age difference: while the normal hearing group is in their middle age, the hearing impaired group are mostly older adults. Nevertheless, it's important to note that, despite the age and hearing differences, the SNR was adapted for each individual to match their 50% and 95% intelligibility. This means we are comparing SNRs of equal intelligibility between the two groups. Results showed that the older, hearing impaired participants presented significantly longer reaction times than the normal hearing group.
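The group comparison itself is a simple summary of per-participant reaction times. The numbers below are invented purely for illustration; only the direction of the effect (longer reaction times for the hearing impaired group) reflects what was found.

```python
from statistics import mean

# Illustrative per-participant mean reaction times (seconds) at matched
# intelligibility; these values are made up for the sketch.
nh_rts = [0.72, 0.68, 0.81, 0.95, 0.74]          # normal hearing group (n=5)
hi_rts = [1.05, 1.22, 0.98, 1.31, 1.10, 1.18]    # hearing impaired group (n=6)

nh_mean, hi_mean = mean(nh_rts), mean(hi_rts)
print(f"NH {nh_mean:.2f} s vs HI {hi_mean:.2f} s, "
      f"difference {hi_mean - nh_mean:.2f} s")
```

A real analysis would of course add a statistical test of the group difference rather than just comparing the means.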
So this result is again consistent with our prediction, as a longer reaction time in the secondary task is associated with the extra effort that this population reports. These results also point out that this methodology could be used to diagnose individuals with an abnormal difficulty communicating in noise. This would add a new dimension to the clinical test battery currently in use, and could inform a clinician of the real problems of this population so they can provide tailored solutions to their clients. At the moment we are investigating the clinical value of this methodology. We believe that the reaction time of the proposed dual task could characterize the listening concerns reported by individuals who have a normal audiogram. So we plan to characterize the performance in the secondary task, that is, the reaction time, by bringing in meaningful variables that explain part of the variability we are observing: for example age, hearing, English proficiency and also cognitive capacity. We think that all these variables explain part of the variability, so it's important to take them into account. Then, once we have characterized their performance, we could categorize individuals based on it. We anticipate that people struggling to communicate in noise would present longer reaction times, and those in a safe zone would present shorter reaction times. Again, we think there is a need to bring the dimension of listening effort to the clinic, and this is how we are exploring the clinical utility of our method. So, to wrap up: a novel approach based on a dual task and self-reported effort is sensitive to listening effort; we have seen that.
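One simple way the proposed categorization could work is to flag clients whose reaction time is abnormally long relative to a normative reference group. The reference values, the z-score cutoff and the two labels below are assumptions for illustration; a clinical version would also adjust for age, hearing, proficiency and cognitive capacity, as described above.

```python
from statistics import mean, stdev

# Assumed normative reaction times (seconds) from a reference group.
reference_rts = [0.72, 0.68, 0.81, 0.95, 0.74, 0.79, 0.70, 0.88]

def classify(client_rt, reference, z_cutoff=1.645):
    """Label a client 'struggling' if their RT lies beyond the one-sided
    95% cutoff of the reference distribution, else 'safe zone'."""
    z = (client_rt - mean(reference)) / stdev(reference)
    return "struggling" if z > z_cutoff else "safe zone"

print(classify(1.20, reference_rts))  # markedly long RT
print(classify(0.75, reference_rts))  # RT typical of the reference group
```

The z-score cutoff is just one possible decision rule; the ongoing work described here is precisely about finding which variables and thresholds make such a categorization clinically meaningful.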
The longer reaction time observed in the older, hearing impaired group is consistent with the extra effort that they require to communicate in challenging scenarios, and we are currently investigating the clinical value of the proposed methodology in order to improve the diagnosis of the hearing difficulties of this population. So I would like to thank you all for your attention, my team at NAL, and also the Department of Health of the Australian Government for supporting NAL's research.