Thank you for coming to our presentation. Our title is "Effect of Selective Loss of Auditory Nerve Fibers on Temporal Envelope Processing: A Simulation Study." Cochlear synaptopathy is a selective loss of auditory nerve fibers caused by sub-clinical noise exposure or aging. Generally speaking, auditory nerve fibers can be classified by their spontaneous rates. Fibers with high spontaneous rates have relatively low thresholds: they are sensitive to soft sounds, but they typically encode envelope information only up to about 40 dB SPL. Meanwhile, fibers with low spontaneous rates have relatively high thresholds: they do not respond until the sound level reaches about 40 dB SPL, and they encode envelope information at higher intensities. In cochlear synaptopathy, fibers with low spontaneous rates are selectively damaged, which could cause a range of problems, including deficits in supra-threshold temporal envelope processing. However, when people have tried to find these deficits in human listeners, the results have been inconsistent. Some have attributed this to poor control over participants, who may have very different noise exposure histories. Some consider the measures of temporal envelope processing not sensitive enough. And others suspect that synaptopathy simply does not exist in humans, or that the assumptions about low-spontaneous-rate fibers were incorrect in the first place. To better understand these issues, our current study uses a computational model, first to study the role of low-spontaneous-rate fibers in temporal envelope processing, and then to examine whether different choices of task parameters affect the sensitivity of a task to synaptopathy. Here we use something called the MAP simulator. The MAP simulator includes an encoder part and a decoder part. The encoder was developed by Professor Ray Meddis and has been used to simulate different hearing pathologies.
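As a rough illustration of the division of labor just described (this toy sketch is our own, not part of the talk's model), one can give each fiber class a sigmoidal rate-level function: high-spontaneous-rate fibers have low thresholds but saturate near 40 dB SPL, so above that level their firing rate no longer tracks the envelope, while low-spontaneous-rate fibers only begin responding around 40 dB SPL. All parameter values here are illustrative assumptions.

```python
import numpy as np

def rate_level(level_db, threshold_db, dynamic_range_db, spont_rate, max_rate=250.0):
    """Toy sigmoidal rate-level function (spikes/s) for one fiber class.

    Illustrative only -- not the MAP model's actual equations. All parameter
    values are assumptions chosen to mimic the textbook picture.
    """
    drive = (level_db - threshold_db) / dynamic_range_db
    return spont_rate + (max_rate - spont_rate) / (1.0 + np.exp(-6.0 * (drive - 0.5)))

levels = np.arange(0, 81, 10)  # sound levels in dB SPL

# High-SR class: low threshold, narrow dynamic range, high spontaneous rate.
high_sr = rate_level(levels, threshold_db=0.0, dynamic_range_db=30.0, spont_rate=60.0)

# Low-SR class: ~40 dB SPL threshold, wider dynamic range, near-zero spontaneous rate.
low_sr = rate_level(levels, threshold_db=40.0, dynamic_range_db=40.0, spont_rate=0.5)

# high_sr is nearly saturated by 40 dB SPL, so only low_sr still modulates
# its rate with the envelope at higher, supra-threshold levels.
```

In the actual study, of course, the fiber responses come from the detailed MAP encoder rather than from a closed-form function like this.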
The encoder contains a physiologically detailed model of the auditory periphery, all the way from the middle ear to the lower brainstem, including the efferent reflexes. We used a total of 30,000 auditory nerve fibers, spanning 30 best frequencies from 56 to 8,000 Hz across three spontaneous-rate classes. In the encoder, we simulated four hearing conditions. The first is normal, in which all auditory nerve fibers are active. The second is "no low spontaneous rate," in which we remove all low-spontaneous-rate fibers to simulate cochlear synaptopathy. The third removes all high-spontaneous-rate fibers, as a contrast condition to synaptopathy. And the last removes both low- and medium-spontaneous-rate fibers to create a more severe case of synaptopathy. Then in the decoder, we converted the neural signals from the auditory nerve back into acoustic waveforms, using a bank of gammatone filters and some further processing stages. In this way, we can present the waveforms to normal-hearing listeners and evaluate the perceptual impact of cochlear synaptopathy in psychophysical tasks. We had three tasks for temporal envelope processing. The first was amplitude modulation detection: 12 normal-hearing young adults listened to 500 Hz carrier tones modulated at 16, 32, or 64 Hz, and we measured the modulation detection threshold using a three-interval, two-alternative forced-choice paradigm with an adaptive procedure. The second task was recognizing natural speech in modulated noise: 16 participants listened to IEEE sentences, spoken by a female voice, presented in 32 Hz modulated noise. The signal-to-noise ratio ranged from −18 to 6 dB, and we measured speech intelligibility at each SNR using 20 sentences per SNR. Then in the third task, we asked participants to recognize unvoiced speech in modulated noise.
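As an aside, the adaptive modulation-detection track can be sketched roughly as follows. The transcript does not specify the tracking rule, so the 2-down/1-up rule, step size, and reversal count below are all assumptions for illustration; `detect_prob` is a hypothetical stand-in for the listener.

```python
import random

def simulate_track(detect_prob, start_depth_db=-4.0, step_db=2.0, n_reversals=8):
    """Toy 2-down/1-up adaptive track on modulation depth in dB (20*log10(m)).

    `detect_prob(depth_db)` stands in for the listener's probability of a
    correct response at a given depth. The rule and parameters here are
    illustrative assumptions, not the study's exact settings.
    """
    depth = start_depth_db
    direction = None          # last direction the track moved
    reversals = []            # depths at which the track changed direction
    correct_in_row = 0
    while len(reversals) < n_reversals:
        if random.random() < detect_prob(depth):
            correct_in_row += 1
            if correct_in_row == 2:        # two correct in a row -> make it harder
                correct_in_row = 0
                if direction == "up":
                    reversals.append(depth)
                direction = "down"
                depth -= step_db
        else:                              # one wrong -> make it easier
            correct_in_row = 0
            if direction == "down":
                reversals.append(depth)
            direction = "up"
            depth += step_db
    return sum(reversals) / len(reversals)  # threshold: mean of reversal depths
```

A 2-down/1-up rule like this converges near the 70.7%-correct point of the psychometric function; the threshold estimate is the average depth at the reversals.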
These were still IEEE sentences, but processed to sound unvoiced, or whispered, using the TANDEM-STRAIGHT vocoder. So in the unvoiced version, speech intelligibility depends only on the spectro-temporal envelope of the speech. The signal-to-noise ratio ranged from −12 to 12 dB, and speech intelligibility was measured in the same way. So here are the results. In the condition where we removed all high-spontaneous-rate fibers, performance did not change much compared with the normal-hearing condition. However, when we selectively removed low-spontaneous-rate fibers, the thresholds got much worse at all three modulation rates. We also found that the difference between the synaptopathy condition and the normal condition became smaller with increasing modulation rate. For the speech tasks, we observed a pattern similar to amplitude modulation detection, in that when we removed all high-spontaneous-rate fibers, speech intelligibility at each signal-to-noise ratio did not differ much from the normal condition. However, when we removed the low-spontaneous-rate fibers to create cochlear synaptopathy, performance got slightly worse for natural speech recognition and much worse for unvoiced speech recognition. And if we look at the signal-to-noise ratio at 50% intelligibility, the degradation in SNR-50 from the normal to the synaptopathy condition was about 1 dB for natural speech recognition and about 4.6 dB for unvoiced speech recognition. So our conclusions are that, in simulation, fibers with low spontaneous rates do play an important role in encoding suprathreshold envelopes, and that when we try to identify cochlear synaptopathy with temporal envelope tasks, not all parameter choices are sensitive enough to reveal it. For example, in amplitude modulation detection, 16 Hz modulation seems to be more sensitive than 64 Hz modulation.
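As a side note, an SNR-50 comparison of the kind reported above can be computed in several ways; a minimal sketch is linear interpolation between the two measured SNRs that bracket 50% correct. The percent-correct values below are made-up illustrative data, not the study's results, and the study may well have fitted a full psychometric function instead.

```python
def snr50(snrs, pct_correct):
    """Estimate the SNR at 50% intelligibility by linear interpolation
    between the two measured points that bracket 50% correct.

    Assumes `pct_correct` increases with SNR. Illustrative sketch only.
    """
    for (s0, p0), (s1, p1) in zip(zip(snrs, pct_correct),
                                  zip(snrs[1:], pct_correct[1:])):
        if p0 <= 50.0 <= p1:
            return s0 + (50.0 - p0) * (s1 - s0) / (p1 - p0)
    raise ValueError("50% point not bracketed by the data")

# Hypothetical psychometric data (percent correct vs. SNR in dB):
snrs = [-12, -8, -4, 0, 4, 8]
normal = snr50(snrs, [5, 20, 45, 70, 90, 98])
synap = snr50(snrs, [2, 10, 30, 55, 80, 95])
print(round(synap - normal, 2), "dB shift in SNR-50")
```

The difference in SNR-50 between the two conditions is then the single-number degradation that the talk quotes (about 1 dB for natural speech, about 4.6 dB for unvoiced speech).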
And for the speech tasks, we recommend using unvoiced speech recognition rather than natural speech recognition in modulated noise, because recognizing unvoiced speech relies only on the spectro-temporal modulation of the speech, while recognizing natural speech uses both spectro-temporal envelopes and pitch cues, so there is a lot of redundancy in natural speech. We hope this study provides some insight into which parameters to use when studying cochlear synaptopathy in humans. Thank you very much.