Hello, I'm happy to kick off this session with a description of our web app, designed to help visualize population responses of auditory-nerve and midbrain models. This project was prompted by a call from the NIH's National Institute on Deafness and Other Communication Disorders to develop cloud-based auditory models. That motivated us to take our pre-existing computational models and move them onto a web app that could be hosted on a public machine. The web app allows you to run simulations and download the results as figures; soon you'll be able to download them as data files as well. The first app that I'll discuss focuses on the responses of auditory-nerve and midbrain neurons to most of the standard psychophysical stimuli, as well as to audio files that a user can upload.

The ingredients of the code hosted on the site are auditory-nerve models, which take an arbitrary stimulus waveform scaled in Pascals, so it can be matched exactly to a psychophysical or physiological experiment. The output of the model is a rate function, which is proportional to the probability of firing of an individual neuron as a function of time. The current model is that of Zilany et al. (2014), and another model hosted on the web app is the more recent model of Bruce et al. We're planning to add future models as they're developed, such as a new one our lab is developing that includes efferents. The output of the auditory-nerve model can be used as the input to higher-level models, such as a midbrain model, simply by feeding it forward; the final result is a rate function for the responses of midbrain neurons. The models currently hosted on the web app focus on modulation tuning, which is a key feature of neurons in the inferior colliculus (IC).

These published models are for single neurons, and the key parameter, of course, is the CF, or characteristic frequency. But a major goal of this web app is to help visualize population responses. We presume that listeners, or behaving animals, use their entire population of neurons to do a task, so it's important to have some representation of that population response if we want to try to predict those responses, as well as to understand the effects of sound pressure level and hearing loss on the global response of the population of neurons. Going from single-neuron code, which is widely available, to looking at populations can be a problem for people less experienced with programming, so we wanted to make that much more accessible. The tool for doing so we call UR_EAR, for University of Rochester Envisioning Auditory Responses.

The key step is going from single-fiber responses, such as the two responses shown here to a sinusoidally amplitude-modulated tone (some frequency channels have very strong beating responses, others don't), and pooling them into a population response. To introduce the format of the population responses we'll show, I'll illustrate with a short video. It shows the responses of several auditory-nerve fibers with different CFs, all plotted as a function of time, and rotates them from a side view to the perhaps less common top view, looking down. Going back to the result, you can see that the single-fiber responses were mapped into rows within the population response, which now has best frequency (BF), or characteristic frequency, on the vertical axis and time along the x-axis. You can see that some of the channels beat strongly, as this channel does, while others have a flat response.
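To make that pipeline concrete, here is a minimal Python sketch of the three steps just described: scaling a stimulus in Pascals to a target dB SPL (re the standard 20 µPa reference), running a single-fiber model at many CFs, and stacking the channels into the time-by-CF image seen in the top view. The `toy_an_rate` function is a hypothetical stand-in (a gammatone-style filter with rectification and smoothing), not the hosted models; in the app itself the single-fiber stage is the Zilany et al. (2014) or Bruce et al. model, and a midbrain stage would be chained onto each row's output in the same way.

```python
import numpy as np
from scipy.signal import fftconvolve
import matplotlib.pyplot as plt

def toy_an_rate(x, fs, cf):
    """Toy stand-in for a single-fiber AN model (NOT the published models):
    gammatone-like bandpass at CF, half-wave rectification, 1-ms smoothing.
    Returns a waveform proportional to instantaneous firing rate."""
    t = np.arange(int(0.025 * fs)) / fs                  # 25-ms impulse response
    b = 24.7 + 0.108 * cf                                # ERB-like bandwidth (Hz)
    g = t**3 * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * cf * t)
    g /= np.sum(np.abs(g))
    y = np.maximum(fftconvolve(x, g, mode="same"), 0.0)  # rectify (IHC-like)
    n = int(0.001 * fs)
    return fftconvolve(y, np.ones(n) / n, mode="same")   # smooth to a rate

# Stimulus: sinusoidally amplitude-modulated tone, scaled in Pascals.
fs, dur = 100_000, 0.5
t = np.arange(int(dur * fs)) / fs
x = (1 + np.sin(2 * np.pi * 100.0 * t)) * np.sin(2 * np.pi * 4000.0 * t)

spl_db = 65.0                                 # target level in dB SPL
p_rms = 20e-6 * 10 ** (spl_db / 20)           # RMS pressure in Pa (re 20 uPa)
x *= p_rms / np.sqrt(np.mean(x**2))           # waveform is now in Pascals

# Population response: one row per CF channel, log-spaced CFs.
cfs = np.logspace(np.log10(250), np.log10(16000), 60)
rates = np.vstack([toy_an_rate(x, fs, cf) for cf in cfs])

# "Top view": CF on the vertical axis, time on the horizontal axis.
plt.pcolormesh(t, cfs, rates, shading="auto")
plt.yscale("log")
plt.xlabel("Time (s)")
plt.ylabel("CF (Hz)")
plt.colorbar(label="Rate (arb. units)")
plt.show()
```

With the real nonlinear models substituted for the toy filter, this is exactly the image the app rotates into the top view; the linear toy will not reproduce effects such as the flat on-carrier channel seen in the video.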
Similarly, the IC responses are mapped into a population response. Again, to illustrate how that's done, we have an image that starts from the so-called side view, where you see responses as a function of time for many BFs, and gradually rotates until we're looking down from above, as illustrated here. The channels that beat very strongly are shown with strong warm colors, while this channel, which is actually tuned to the carrier frequency but has a very weak response because its inputs are not fluctuating, is shown as the dark stripe here.

Now we'll take a look at the actual web app, illustrated here. The site is UR_EAR; we'll have the link on a later slide. When you run the app, it opens to show three main panels. The left-most panel is for the stimulus parameters. As I mentioned, you can upload an audio file here, scale its sound level, and so forth, or select one of a number of different stimuli; when you select a stimulus, the appropriate parameters are made available for entry. The middle panel describes the auditory-nerve and inferior colliculus models. Those can be selected here, along with the number of fibers to be averaged within each frequency channel, the range of frequencies and number of channels, the species, and the spontaneous rate. For this community, I think an interesting feature is the ability to enter an audiogram, either for an average listener or for an individual, in order to see the effects of hearing loss on the responses of the neurons. The IC models available here are two simple models, with a best modulation frequency as the key parameter; a sketch of one such modulation-tuned stage appears below, after the closing remarks.

When the model is run, we see, here, the response to two vowel sounds; by clicking on the plots, you toggle between the two conditions, if you entered two conditions. The left-most plots illustrate the stimulus spectrogram and spectrum, the middle column shows the auditory-nerve and IC responses, and on the right are average rates, so the average of each channel over time is illustrated here for the auditory nerve and the IC. The bottom-most plots are what we refer to as the wide display, which shows a larger view of the stimulus waveform over time, the spectrogram, and the inner-hair-cell, auditory-nerve, and midbrain (IC) model outputs. At the top of the GUI are buttons for accessing a manual on our OSF website, for looking at FAQs (when we get questions, we'll post answers there), and a contact button for sending questions.

What's next is to develop more apps beyond the basic visualization tool that I showed you. We're interested in illustrating longer responses to stimuli such as music and speech, in providing an app that gives easier access to our physiological data, so it can be plotted and compared to the model responses, and in a tool to estimate psychophysical responses from the population model responses. These are apps that would allow you to gradually change sound level and degree of hearing loss, for example. We're open to, and interested in, suggestions for new or different types of applications that we might be able to develop. The app is located at this link, and the open-source code is available on our OSF website. Thanks, and I look forward to questions.
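As a postscript, here is the promised sketch of what a simple IC stage with a best modulation frequency can look like. It is only illustrative, loosely in the spirit of excitation-minus-delayed-inhibition models of modulation tuning; the function name and every parameter value are invented for the example and are not the published models hosted in UR_EAR.

```python
import numpy as np
from scipy.signal import lfilter

def toy_ic_rate(an_rate, fs, bmf=100.0):
    """Illustrative band-enhanced IC cell: fast excitation minus slower,
    delayed inhibition. Unmodulated input cancels, while envelopes that
    fluctuate near the best modulation frequency (BMF) pass through.
    All parameter values here are invented for illustration."""
    def smooth(x, tau):
        # Two cascaded one-pole lowpass filters (alpha-function-like).
        a = np.exp(-1.0 / (tau * fs))
        y = lfilter([1 - a], [1, -a], x)
        return lfilter([1 - a], [1, -a], y)

    tau_e = 1.0 / (2 * np.pi * bmf)          # excitatory time constant
    exc = smooth(an_rate, tau_e)
    inh = smooth(an_rate, 2.0 * tau_e)       # slower inhibition
    d = max(1, int(round(0.001 * fs)))       # 1-ms inhibition delay
    inh = np.concatenate([np.zeros(d), inh[:-d]])
    return np.maximum(exc - 1.5 * inh, 0.0)  # rectified difference
```

Applied row by row to the AN population from the earlier sketch, for example `np.vstack([toy_ic_rate(r, fs) for r in rates])`, this yields an IC population image in the same time-by-CF format: rows whose inputs fluctuate near the BMF light up, and rows with flat inputs go dark, like the dark stripe at the carrier frequency in the video.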