Hey everyone, my name's Tim Brochet, and today I'll be talking about a fully computational model of CI speech perception. The primary goal of our lab is to improve speech perception for cochlear implant listeners, and typically we do this by trying out a new processing strategy or stimulation technique on a group of CI listeners and seeing if we observe any benefit.

This type of experiment has a number of issues and challenges, which I've laid out here. We've got ethical issues: we don't want to cause any maladaptive plasticity. We've got bias issues: people have years of experience with an old strategy before we try out a new one. There are lots of sources of variance in these studies that might be unrelated to the actual ability of a processing strategy to transmit information. There are logistical factors involved in organizing take-home trials, double-blinding experiments, and ensuring that you have a sufficient number of participants. There are, of course, cost and time issues, and the question of how to optimize a processing strategy before trying it out in a group of CI listeners. For all these reasons, we think we need to develop a computational model of cochlear implant speech perception, so that we can try out processing strategies before going into these costly studies with CI listeners. So our research question is: can we replicate phoneme-level CI speech perception patterns using a computational model?

Our computational model starts off the way all CI processing strategies do, with a time-frequency decomposition. We then extract the envelope in each channel and use it to modulate a train of biphasic pulses. We send those pulse trains into a finite element model of voltage spread, and by extracting the voltage at different points in the cochlea we can use it to activate neural models; we model 1,500 neurons along the length of the cochlea. By running a large corpus of speech through this pipeline, we can generate a large set of neural activation patterns, or neurograms, for speech that is labeled by phoneme.

Typical automatic speech recognition neural networks are trained on spectrogram representations of speech. What we've done is train an automatic speech recognition network on all of these neurograms, and we expect to get similar phonemic information transmission from them, because they degrade information in a similar way to what happens in cochlear implants. For the results, please come to our talk, and thank you for listening.
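The talk doesn't give implementation details, but the front end it describes (a bandpass filterbank, per-channel envelope extraction, and envelope-modulated biphasic pulse trains) might look roughly like the following Python sketch. The channel count, pulse rate, filter designs, and frequency range are illustrative assumptions on my part, not parameters from the talk.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def ci_front_end(audio, fs, n_channels=16, pulses_per_s=900,
                 f_lo=200.0, f_hi=7000.0):
    """Toy CI front end: bandpass filterbank -> envelope extraction
    -> envelope-modulated biphasic pulse trains, one per channel."""
    # Log-spaced channel edges (assumed; strategies vary)
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    n = len(audio)

    # Schematic biphasic carrier: one positive and one negative sample
    # per pulse (real implants use microsecond-scale interleaved pulses)
    carrier = np.zeros(n)
    onsets = np.arange(0, n - 1, int(fs / pulses_per_s))
    carrier[onsets] = 1.0       # first phase
    carrier[onsets + 1] = -1.0  # opposite-polarity second phase

    # Envelope = full-wave rectification followed by a 200 Hz low-pass
    env_lp = butter(2, 200.0, btype="low", fs=fs, output="sos")

    channels = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, audio)
        env = sosfiltfilt(env_lp, np.abs(band))
        channels.append(np.clip(env, 0, None) * carrier)
    return np.stack(channels)  # (n_channels, n_samples) electrode drive

# Example: one second of noise as stand-in speech
fs = 16000
drive = ci_front_end(np.random.randn(fs), fs)
print(drive.shape)  # (16, 16000)
```

The per-channel output here would be what feeds the finite element voltage-spread model; the one-sample biphasic carrier is only a placeholder for the actual stimulation waveform.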
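Likewise, the talk doesn't specify the network trained on the neurograms. As a minimal sketch, a small convolutional classifier over phoneme-labeled neurogram patches could be set up as below; the architecture, the 39-class phoneme set (a TIMIT-style folding), and all shapes are my assumptions, not the authors' model.

```python
import torch
import torch.nn as nn

class NeurogramPhonemeNet(nn.Module):
    """Small CNN mapping a neurogram patch (neurons x time frames)
    to a phoneme class, standing in for the ASR back end."""
    def __init__(self, n_phonemes=39):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)),  # shape-agnostic pooling
        )
        self.head = nn.Linear(32 * 8 * 8, n_phonemes)

    def forward(self, x):  # x: (batch, 1, n_neurons, n_frames)
        return self.head(self.conv(x).flatten(1))  # phoneme logits

# One training step on a stand-in batch: 1,500 neurons x 40 frames
model = NeurogramPhonemeNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

neurograms = torch.rand(8, 1, 1500, 40)  # placeholder neurogram patches
labels = torch.randint(0, 39, (8,))      # placeholder phoneme labels

opt.zero_grad()
loss = loss_fn(model(neurograms), labels)
loss.backward()
opt.step()
```

The point of the comparison in the talk is the input representation, not the classifier: the same kind of network trained on spectrograms versus on model-generated neurograms should reveal how much phonemic information survives the simulated electrode-to-nerve pathway.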