Hi, my name is Mark. At a high level, this work combines state-of-the-art peripheral auditory models with artificial neural networks to model the speech recognition difficulties of humans with hearing loss. While a great deal is known separately about the peripheral physiology and the perceptual consequences of sensorineural hearing loss, our understanding of how the first drives the second has been limited by a lack of computational models that can perform real-world auditory tasks as well as humans do. By combining highly detailed computational descriptions of the auditory periphery with deep learning, we aim to address this need.

Deep artificial neural networks optimized to perform auditory recognition tasks from simulated cochlear input have recently been shown to replicate aspects of human auditory behavior. Here, we extend this approach to investigate how damage to peripheral auditory structures can account for difficulties recognizing speech in noisy environments. Our use of a detailed peripheral auditory model allows us to plausibly simulate hearing loss in our network's ears. For instance, we can simulate loss of outer hair cells, which results in broader frequency tuning and reduced responses to quiet sounds in the periphery. Similarly, we can simulate loss of inner hair cells and auditory nerve fibers, and then investigate the effects of these changes on the speech recognition behavior of our deep neural network.

To see the results of some of these manipulations, I hope you'll stop by. Thank you.
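The talk describes two outer-hair-cell effects: broader frequency tuning and reduced responses to quiet sounds. The actual work uses a detailed peripheral auditory model; the sketch below is only a minimal toy illustration of those two effects at a single cochlear frequency channel. All function names, the 40 dB gain figure, and the broadening factor are illustrative assumptions, not parameters from the authors' model.

```python
import numpy as np

def erb_hz(cf_hz):
    """Equivalent rectangular bandwidth of a healthy auditory filter
    at center frequency cf_hz (Glasberg & Moore, 1990)."""
    return 24.7 * (4.37 * cf_hz / 1000.0 + 1.0)

def cochlear_channel_response(cf_hz, level_db_spl, ohc_loss=0.0):
    """Toy input-output function for one cochlear frequency channel.

    ohc_loss is a fraction in [0, 1]: 0 = healthy outer hair cells,
    1 = complete OHC loss. Returns (bandwidth_hz, output_level_db).
    OHC loss (a) broadens frequency tuning and (b) removes the
    level-dependent gain that amplifies quiet sounds.
    """
    # (a) Broader tuning: assume bandwidth grows up to ~3x with full OHC loss.
    bandwidth_hz = erb_hz(cf_hz) * (1.0 + 2.0 * ohc_loss)

    # (b) Reduced response to quiet sounds: assume healthy OHCs contribute
    # up to ~40 dB of gain at low levels, compressing away by ~100 dB SPL.
    max_gain_db = 40.0 * (1.0 - ohc_loss)
    gain_db = max_gain_db * np.clip(1.0 - level_db_spl / 100.0, 0.0, 1.0)
    return bandwidth_hz, level_db_spl + gain_db

# A quiet 20 dB SPL tone at 1 kHz: the healthy channel amplifies it,
# while the OHC-damaged channel does not (and is more broadly tuned).
for loss in (0.0, 1.0):
    bw, out = cochlear_channel_response(1000.0, 20.0, ohc_loss=loss)
    print(f"OHC loss {loss:.0%}: bandwidth = {bw:6.1f} Hz, output = {out:4.1f} dB")
```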
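The loss of inner hair cells and auditory nerve fibers could likewise be sketched as removing fibers from the simulated nerve representation before it reaches the network. This is a minimal sketch under assumed conventions (a neurogram array of shape channels × fibers × time, and a hypothetical `apply_fiber_loss` helper); it is not the authors' implementation.

```python
import numpy as np

def apply_fiber_loss(neurogram, survival=1.0, seed=None):
    """Silence a random (1 - survival) fraction of simulated auditory
    nerve fibers in a neurogram of shape (channels, fibers, time),
    approximating inner-hair-cell / nerve-fiber loss."""
    rng = np.random.default_rng(seed)
    surviving = rng.random(neurogram.shape[:2]) < survival
    return neurogram * surviving[:, :, None]

# Example with a random stand-in neurogram (50 channels x 10 fibers x 200 bins):
# with 30% fiber survival, the representation passed to the network is sparser,
# and one would then compare the network's speech recognition on both inputs.
healthy = np.random.default_rng(0).poisson(2.0, size=(50, 10, 200)).astype(float)
impaired = apply_fiber_loss(healthy, survival=0.3, seed=1)
print(healthy.mean(), impaired.mean())  # mean activity drops roughly with survival
```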