Hello, good morning everybody. It's David Ming here from National Acoustic Laboratories. I'm very happy to share with you some results from a recent research project titled Neural Tracking of Linguistic Information as a Measure of Speech Understanding in Noise.

Brain responses have been reported to track not only the acoustic features of speech, but also the hierarchical linguistic information embedded in connected speech (Ding et al., 2016). This tracking activity could potentially serve as a neural marker of speech understanding in noise, because it offers concurrent but separate indices of the brain's acoustic encoding and its linguistic encoding. Using electroencephalography, or EEG, our study aimed to characterize the performance of this neural tracking activity. Specifically, we wanted to know whether we can obtain a reliable measure of the tracking activity with EEG even when realistic noise, such as multi-talker babble, is present, and how sensitive it is to changes in background noise that are associated with changes in both acoustic features and speech intelligibility.

Here's what we did. We recruited and tested 19 participants who met the requirements shown here. We recorded continuous EEG from all participants using a BioSemi ActiveTwo system with 64 electrodes. Participants listened to a series of speech stimuli, and at the end of each trial they were asked to report whether all the sentences they heard were grammatical or at least one was ungrammatical. After that, they rated the intelligibility, that is, how much of the speech they could understand, on a scale from one to five. The whole experiment took around two hours. Here are some details about the stimuli, the speech materials we used.
As you can see, they are quite different from natural speech sentences, because they are all specially constructed sentences following the same syntactic structure: four-syllable short sentences, in which every two syllables form a phrase. In this way, we control the rates at which the linguistic units occur in the speech: the syllables are always presented at this rate, the phrases always at this rate, and likewise for the sentences. The sentences were presented at a fixed level of 65 dB, and we introduced multi-talker babble noise to vary the SNR across experimental conditions; the SNRs were 5 dB, 0 dB, and -5 dB.

Here are the results from the ratings. As the noise level goes up, the speech intelligibility ratings go down systematically, which means that introducing background noise really did change speech intelligibility, and that participants did not adapt to the background noise level even after several trials of listening.

Here are the acoustic characteristics of the speech stimuli. The thing to note is that even though we introduced different levels of background noise, across all conditions there is a very strong power modulation corresponding to the syllable rate, although the level of this modulation decreases as the background noise increases. Most importantly, there is no power modulation at the frequencies corresponding to the phrases and sentences, which means that in the later EEG measurements, any responses at those frequencies must come purely from mental, linguistic processing, not from acoustic cues.

Here are the results from the behavioral task: the detection of ungrammatical sentences.
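The SNR manipulation just described is simple to sketch: the speech stays at its fixed presentation level, and the babble is scaled to hit the target SNR. Here is a minimal illustration; the function name and the white-noise stand-ins for speech and babble are mine, not materials from the study.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that speech + scaled noise has the target SNR.
    The speech itself is left untouched, mirroring the fixed 65 dB
    presentation level used in the study."""
    rms_speech = np.sqrt(np.mean(speech ** 2))
    rms_noise = np.sqrt(np.mean(noise ** 2))
    # Choose gain g so that 20*log10(rms_speech / (g * rms_noise)) == snr_db
    gain = rms_speech / (rms_noise * 10 ** (snr_db / 20))
    return speech + gain * noise

# Toy signals standing in for a sentence and multi-talker babble
rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)
babble = rng.standard_normal(16000)

# The three SNR conditions from the study
mixtures = {snr: mix_at_snr(speech, babble, snr) for snr in (5, 0, -5)}
```

Scaling the noise rather than the speech keeps the speech level constant across conditions, so any change in the listener's response can be attributed to the noise rather than to overall level.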
As we can see, for the speech-in-quiet condition we obtained a significantly higher accuracy rate, and as the noise level goes up, the accuracy rate goes down; in the two most difficult conditions, 0 dB and -5 dB, the accuracy rates did not differ from each other.

These are the EEG results, averaged across all electrodes and all participants, a grand average. We calculated the coherence between the EEG and a template signal containing three distinct frequencies, corresponding to the syllable rate, the phrase rate, and the sentence rate. What we see is that as the noise level goes up, we again find a reduction in the EEG responses at all three frequencies. The thing to note is that at the syllable rate this reduction is mainly due to changes in the speech acoustics, that is, the increased background noise, whereas at the phrase and sentence rates the reduction is mainly due to the decrease in perceived speech intelligibility.

From this experiment we can confirm that reliable brain tracking activity can be measured with EEG even when background noise, here multi-talker babble, is present, and that the tracking response is sensitive to changes in background noise level. However, at different frequencies these changes reflect different mechanisms: at the syllable rate they correspond to changes in the speech acoustics, whereas at the rates corresponding to higher-level linguistic units, the phrases and sentences, the reduction reflects an internal process representing perceived intelligibility that is separate from the acoustic cues.

Thanks very much for your attention. If you'd like to discuss this study, or if you have any further questions, I'd be happy to have a chat with you.
Thanks very much.
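The coherence analysis described above can be sketched as follows. This is an illustrative sketch only: the 4/2/1 Hz syllable/phrase/sentence rates follow the Ding et al. (2016) frequency-tagging design but are assumptions here, the "EEG" is synthetic, and the sampling rate and segment length are my choices, not the study's parameters.

```python
import numpy as np
from scipy.signal import coherence

fs = 128.0                      # assumed EEG sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)    # 60 s of signal

# Template with energy at the three tagged rates: syllables (4 Hz),
# phrases (2 Hz), sentences (1 Hz) -- rates assumed for illustration.
template = (np.sin(2 * np.pi * 4 * t)
            + np.sin(2 * np.pi * 2 * t)
            + np.sin(2 * np.pi * 1 * t))

# Synthetic "EEG": a weak copy of the template buried in noise
rng = np.random.default_rng(1)
eeg = 0.2 * template + rng.standard_normal(t.size)

# 8 s segments give 0.125 Hz resolution, so 1, 2 and 4 Hz fall on exact bins
freqs, coh = coherence(eeg, template, fs=fs, nperseg=int(8 * fs))
for rate in (1.0, 2.0, 4.0):
    bin_idx = np.argmin(np.abs(freqs - rate))
    print(f"coherence at {rate:g} Hz: {coh[bin_idx]:.2f}")
```

Coherence peaks at the tagged rates even though the template is buried well below the noise floor, which is why this kind of frequency-tagged analysis can pull out syllable-, phrase-, and sentence-rate tracking from noisy single-trial EEG.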