Hello, I look forward to telling you about our recent project, which involves modeling formant frequency discrimination using computational models of the auditory nerve and midbrain. As you know, auditory nerve responses are often characterized in terms of average rates and phase locking to the temporal fine structure; we've recently been focusing on a different aspect: slow fluctuations that can be seen in the response histograms. At the right, you see responses of a model auditory nerve fiber to two stimuli. The responses have the same average rate, in fact saturated average rates, and the temporal fine structure phase locking is comparable, but there are qualitatively large differences in the neural fluctuations, as highlighted by the red lines. These fluctuations are an interesting feature of the responses because they encode many aspects of complex sounds. As this illustration shows, channels tuned to spectral peaks, because of saturation of the inner hair cells, have flatter, lower-amplitude fluctuations than channels tuned to frequencies between the peaks. These differences in fluctuation amplitude arise from nonlinearities in the inner ear, in particular saturation of the inner hair cell response, which flattens the fluctuations at spectral peaks.

They're also interesting to us because of what happens downstream. Here is another illustration of differences in fluctuation amplitude across auditory nerve frequency channels that again have essentially the same average rate; at the level of the midbrain, where neurons are sensitive to fluctuations, in particular to modulations, these differences in the auditory nerve fluctuations result in large rate differences. This is an interesting way to imagine complex sounds being encoded. We hypothesized that these features might play a role in encoding vowel sounds, for example, as well as other speech sounds. The cue is very robust across a wide range of sound levels and in noise, but it depends on nonlinearities and is therefore vulnerable to hearing loss, which should make it interesting to this community.

So here we're using computational models of the auditory nerve and midbrain to predict formant frequency discrimination in listeners, in a dataset that we presented recently at ARO. The behavioral method used an adaptive track to estimate thresholds for discriminating either F1 or F2, using synthetic two-formant vowel-like sounds based on the Klatt synthesizer. The pitch was always 100 Hz, presented at 75 dB SPL. The standard stimuli always had formants at 600 and 2000 Hz, and we varied the formant bandwidths as illustrated here. About 100 Hz would be typical for Klatt-based formants in these frequency ranges, but we made them narrower or broader, anticipating that performance would be worse for broader formants. In testing, listeners heard two pairs of sounds and had to detect the one that contained the target. Our listeners had either normal hearing, indicated here by the black lines in the audiograms, or mild sensorineural hearing loss, indicated by the red lines, plus intermediate listeners, marked here in blue, with an interesting steeply sloping loss. These colors are maintained in the later figures. To model the thresholds, we computed the responses of our model auditory nerve and midbrain neurons to the standard stimuli, using a population of neurons at just seven characteristic frequencies spanning a third of an octave centered on either F1 or F2.
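To make that setup concrete, here is a minimal Python sketch, not the study's code: a 100-Hz pulse train shaped by a cascade of two Klatt-style second-order resonators, plus the seven-CF, one-third-octave grid just described. The simple pulse-train source and all names here are illustrative assumptions; the actual stimuli came from a full Klatt synthesizer.

```python
# Minimal sketch (not the study code): two-formant vowel-like stimulus
# and the one-third-octave CF grid. Names and the pulse-train source
# are illustrative assumptions; the study used a full Klatt synthesizer.
import numpy as np
from scipy.signal import lfilter

def resonator(x, f, bw, fs):
    # Klatt-style second-order resonator: center frequency f (Hz),
    # bandwidth bw (Hz); gain normalized to 1 at DC.
    r = np.exp(-np.pi * bw / fs)
    b1 = 2 * r * np.cos(2 * np.pi * f / fs)
    b2 = -r**2
    return lfilter([1 - b1 - b2], [1, -b1, -b2], x)

def two_formant_vowel(f1=600.0, f2=2000.0, bw=(100.0, 100.0), f0=100.0,
                      dur=0.3, fs=48000, level_db_spl=75.0):
    # Glottal pulse train at F0, shaped by cascaded formant resonators.
    x = np.zeros(int(dur * fs))
    x[::int(fs / f0)] = 1.0
    y = resonator(resonator(x, f1, bw[0], fs), f2, bw[1], fs)
    # Scale RMS to the nominal presentation level (dB SPL re 20 uPa).
    return y * 20e-6 * 10**(level_db_spl / 20) / np.sqrt(np.mean(y**2))

# Seven model CFs spanning one-third octave, centered here on F1:
cfs = 600.0 * 2.0**np.linspace(-1/6, 1/6, 7)
```

The bw argument is the per-formant bandwidth that was varied across conditions; widening it flattens the spectral peak, which is the intuition behind expecting worse performance for broader formants.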
The internal noise was simply the spontaneous activity of the model auditory nerve fibers, and the IC model neurons were of the type that is excited by fluctuations, so-called band-enhanced neurons. Therefore, at a spectral peak, where the auditory nerve fluctuations are flattened, you see a decrease in the IC response, even though the auditory nerve rate is increased, because the fluctuation amplitudes are smaller there. To simulate individual listeners, we introduced the information from their audiograms into the auditory nerve model.

To generate threshold estimates from the model, we built a template based on many repetitions of the standard stimulus, for either the auditory nerve or the IC model. We then computed the responses on each trial, with the two intervals differing by a change in formant frequency, selected the interval that was most different from the template using a Mahalanobis-distance-type metric, and scored the trial as correct or incorrect. We tallied the score across many trials, varied ΔF to obtain percent correct versus ΔF, and fit a logistic function to estimate the threshold (a minimal sketch of this decision stage appears below, after the results).

These are the behavioral thresholds for F1 and F2. Note the different vertical axes, which are preserved in the following plots. Within each bandwidth, the symbols are ordered by hearing loss, using the pure-tone-average thresholds. As expected, the listeners with hearing loss (red symbols) had higher thresholds than the normal-hearing listeners, and thresholds generally increase with increasing hearing loss.

Now we add the model thresholds from the auditory nerve, which are based on excitation patterns of the auditory nerve population response. For many conditions, especially for F1, there was no threshold: the model was not able to perform the discrimination over the range that we tested. In a few cases, for the listeners with normal hearing in the F2 range, we were able to predict thresholds, but they were higher than those of the listeners. We could do more to explore what can be extracted from the auditory nerve, but we've moved on to examine the fluctuation cues using the IC model.

In this case, starting with the F2 results, the model thresholds are lower than the listeners', which is good. We focus on the trends rather than the absolute model thresholds, since the absolute thresholds could be adjusted upward, for example by adding internal noise. In all cases for F2 discrimination, the model thresholds showed the appropriate trends, although they were always lower than the listeners' thresholds. For F1 discrimination, however, there's an interesting result: the model thresholds for the listeners with sensorineural hearing loss are actually lower than the model thresholds for normal hearing. This is shown more clearly here, where we plot model thresholds versus subject thresholds for each formant (one per row) and for each bandwidth. You can see that for all of the F2 cases, the model thresholds and subject thresholds are well correlated. That's also true for the broader-bandwidth F1 conditions, but for the narrower-than-typical bandwidths, the sharpened formants, at F1 you see the opposite pattern. What happens, as you would expect, is that the impaired ear actually has larger fluctuation cues, and the model takes advantage of them and does better on the task. But the listeners do not seem to be able to use the stronger cues in this task.
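Here is the minimal sketch of the template-based decision stage referred to above. It assumes each interval's response has already been reduced to a vector of average rates across the seven model channels; the exact response summary, the pseudoinverse covariance, and the logistic parameterization (its midpoint taken as the threshold) are assumptions, not details from the study.

```python
# Minimal sketch of the template decision stage described above.
# Assumes each response is a 7-element rate vector (one per CF).
import numpy as np
from scipy.optimize import curve_fit

def build_template(standard_responses):
    # Template from many repetitions of the standard stimulus:
    # mean vector and (pseudo)inverse covariance across repetitions.
    R = np.asarray(standard_responses)          # shape (n_reps, n_channels)
    return R.mean(axis=0), np.linalg.pinv(np.cov(R, rowvar=False))

def mahalanobis(resp, mean, cov_inv):
    d = resp - mean
    return float(np.sqrt(d @ cov_inv @ d))

def trial_correct(standard_resp, target_resp, mean, cov_inv, rng):
    # Two-interval trial: pick the interval farther from the template;
    # the trial is scored correct when that interval is the target.
    intervals = [standard_resp, target_resp]    # index 1 = target
    order = rng.permutation(2)                  # randomize interval order
    dists = [mahalanobis(intervals[i], mean, cov_inv) for i in order]
    return order[int(np.argmax(dists))] == 1

def logistic(delta_f, thresh, slope):
    # 2AFC psychometric function: 50% chance floor rising toward 100%;
    # treating the 75%-correct midpoint as "threshold" is an assumption.
    return 50.0 + 50.0 / (1.0 + np.exp(-slope * (delta_f - thresh)))

# Usage outline (responses would come from the AN or IC population model):
#   rng = np.random.default_rng(0)
#   mean, cov_inv = build_template(standard_reps)
#   pc = [100 * np.mean([trial_correct(s, t, mean, cov_inv, rng)
#                        for s, t in trials[df]]) for df in delta_fs]
#   (thresh, slope), _ = curve_fit(logistic, delta_fs, pc, p0=[10.0, 0.1])
```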
In fact, the profile across the population is distorted: even though the cues are larger, they do not form the correct profile across channels. Moving forward, we need to incorporate this distortion of the overall response pattern into the model predictions.

In conclusion, the listeners' thresholds increased with bandwidth and with degree of hearing loss, as expected. The auditory nerve model thresholds, based on excitation patterns, exceeded those of the listeners, and in many conditions the model had no measurable threshold. The IC model thresholds were always measurable and were lower than those of the listeners, which is good. Again, we focused on the trends, which in general matched those of the listeners nicely, except for the narrow-bandwidth, sharp formants at the F1 frequency; in that case the model depends on large fluctuation cues, which serve the model even when they're in the wrong channels, but the listeners do not seem to be able to use that cue. We're continuing data collection and hope to compare these results for individual listeners with their speech-in-noise test results. We think this test might provide a nice alternative to speech-based testing, which can be difficult and time-consuming, and this task also lends itself nicely to quantitative analyses. Thank you, and I look forward to the opportunity to answer questions.