My talk is about virtual audiovisual environments for hearing aid evaluation and fitting. During my PhD project in Oldenburg, I helped to develop the virtual environments that are shown here, in order to investigate the self-motion of people listening in everyday life situations. If you are interested in hearing the acoustics as well, you can follow this link to some videos on YouTube.

We modeled the following situations: a living room where the news was shown on the TV; a lecture hall where a lecture was given; a cafeteria environment with a multi-talker conversation of people sitting around a table; a train station with announcements over the loudspeaker system; a street environment with a multi-talker conversation at a bus stop; and finally a street situation where we measured passive listening of participants standing at an intersection, who were just told to wait there for someone to arrive.

This is how it looked in the lab. The acoustic environment was presented over many loudspeakers, and the visual environment was projected on a cylindrical screen. We measured the self-motion of the participants while they listened to the virtual environments, using a head tracker and EOG electrodes.

When developing the virtual environments, we first showed that visual cues are important for self-motion. We compared a condition with videos of real persons to conditions with animated characters, as shown here. Typical self-motion in such an environment with a multi-talker conversation, for example in the cafeteria, looks like this. Plotted here is the horizontal gaze angle, the direction the participants were looking, over time. The thin grey lines represent the individual data of the participants; as you can see, there is quite a bit of variance between participants. On average, however, they follow the location of the active speaker, plotted in green, quite accurately.

This was the condition in the cafeteria with virtual characters. If we look at the error between the direction the participants were looking and the actual position of the active speaker, we see that it is pretty much the same for the video condition and for the condition with animated characters that had some kind of lip movement; in both cases the participants looked about equally accurately at the active speaker. For the audio-only condition, however, the error is much larger: there the participants no longer really follow the active speaker, but mostly just look straight ahead. If the animated characters did not move their mouths, the error was also larger. So we have shown that visual cues can influence self-motion, and that they can steer spatial attention, in this case towards the direction of the active speaker.

We have also shown that self-motion can influence the performance of directional hearing aid algorithms, by estimating the broadband SNR benefit that different algorithms would have provided in these virtual environments under the self-motion of the different participants. This is plotted here. We see a pretty large range, namely the range over the self-motion trajectories of the different participants. Depending on how a participant moved, there could be up to a 6 dB difference in the broadband SNR benefit of directional algorithms, and this is averaged over the entire duration of the virtual environment; a simplified sketch of this kind of estimation follows below.
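To give an impression of this analysis, here is a minimal sketch of how an SNR benefit could be estimated under a self-motion trajectory. It assumes a hypothetical first-order cardioid directivity pattern and stationary source powers; the actual algorithms and acoustic modeling in the study were more sophisticated, so this only conveys the idea.

```python
import numpy as np

def cardioid_power_gain(angle_rad):
    """Power gain of a hypothetical cardioid beamformer steered to the front."""
    return (0.5 * (1.0 + np.cos(angle_rad))) ** 2

def broadband_snr_benefit(head_yaw, target_az, noise_az, target_pow, noise_pow):
    """Average SNR benefit (dB) of the directional pattern relative to an
    omnidirectional microphone, over one self-motion trajectory.

    head_yaw   : head orientation over time, radians, shape (T,)
    target_az  : azimuth of the active speaker over time, radians, shape (T,)
    noise_az   : azimuths of the interferers, radians, shape (N,)
    target_pow : power of the target source (scalar)
    noise_pow  : powers of the interferers, shape (N,)
    """
    g_target = cardioid_power_gain(target_az - head_yaw)                  # (T,)
    g_noise = cardioid_power_gain(noise_az[None, :] - head_yaw[:, None])  # (T, N)
    snr_omni = 10.0 * np.log10(target_pow / noise_pow.sum())
    snr_dir = 10.0 * np.log10(g_target * target_pow
                              / (g_noise * noise_pow).sum(axis=1))
    return float(np.mean(snr_dir - snr_omni))

# Example: the same pattern evaluated under two head-movement strategies,
# a listener who keeps looking straight ahead and one who follows the speaker.
T = 1000
speaker = np.deg2rad(30.0) * np.ones(T)        # speaker fixed at 30 degrees
noise = np.deg2rad(np.array([150.0, -120.0]))  # two interferers
for yaw in (np.zeros(T), speaker):
    print(broadband_snr_benefit(yaw, speaker, noise, 1.0, np.ones(2)))
```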
If we were to look at individual time points, this effect could be even larger.

We also looked at the self-motion of hearing impaired listeners, and we found some interesting differences from normal hearing participants in the amount of movement that was done with the head and the eyes. When looking in a certain direction, people typically do part of the movement with their head and part of the movement with their eyes, so they do not turn their head all the way. The young normal hearing participants did about 46% of the movement with their head. On average, the older normal hearing participants did significantly more of the movement with their head, namely 57%, and for the hearing impaired participants this was even larger: they did almost all of the movement with their head, about 90%. A small sketch of how this head-movement fraction can be quantified is shown below.

However, we did not see a difference for hearing impaired participants who had their hearing loss compensated and were listening to the output of an adaptive differential microphone: the adaptive differential microphone had no effect on the self-motion of the participants. We have to note here that the adaptive differential microphone has quite low directivity, and the benefit it provided in the virtual environment was also quite low; for algorithms with a higher directivity, we do expect an effect on self-motion. A sketch of this type of microphone also follows below.

So we showed that with the developed methods we can assess listening behaviour in more ecologically valid conditions in the lab, by adding critical cues, increasing the acoustic complexity, and including head movement. This can at least partly explain the difference in benefits between standard lab situations and real life that has been reported in the literature.

We also found some points for improvement when developing these virtual environments. For validation, we checked whether the environments would be realistic enough to allow participants to imagine being in the real environment. We did this with a presence questionnaire, and the participants reported that, at least to some extent, they had the sense of being there in the virtual environment. The realism of the audio was also judged to be pretty good by the participants. For the video realism, however, they said it was somewhat unrealistic. In the interview we asked specifically which points they felt were unrealistic. They mentioned that objects, especially the close objects, looked unrealistic; this is probably because of the shading in the virtual environment. They mentioned that in the cafeteria the persons were mumbling, and that people would not talk like that in noise. This is known as the Lombard effect: in the presence of noise, people usually increase their vocal effort. The conversations, however, were recorded in quiet, and this caused a bit of a mismatch; even though the SNR was realistic, the speech was still a bit too unintelligible. This would have to be improved in a next version of the virtual environments.

They also mentioned that they could not lip-read. This was because the lip-syncing model was a pretty simple model that also worked in real time. Work is currently being done in Oldenburg to improve this, in another PhD project, and I am sure that more institutions are looking into this as well. They also mentioned something about interaction: in the cafeteria environment, for example, they would normally just ask the persons to speak up, which of course was not possible here. It has also been shown in the literature that interaction can affect self-motion, so this would be a factor for improvement as well.
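To make the head-versus-eye numbers concrete, here is a minimal sketch of one way the head-movement fraction could be quantified from the head tracker and EOG signals, namely as the least-squares slope of head orientation against total gaze direction. The exact measure used in the study may differ; this is only an illustration.

```python
import numpy as np

def head_movement_gain(head_yaw_deg, gaze_deg):
    """Fraction of the gaze movement carried out by the head, estimated as the
    least-squares slope of head orientation (head tracker) against total gaze
    direction (head orientation plus eye-in-head angle from the EOG)."""
    head = head_yaw_deg - np.mean(head_yaw_deg)
    gaze = gaze_deg - np.mean(gaze_deg)
    return float(np.dot(head, gaze) / np.dot(gaze, gaze))

# Simulated listener who does ~46% of each gaze shift with the head,
# as reported for the young normal hearing participants.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 60.0, 6000)
gaze = 40.0 * np.sin(0.3 * t)                      # gaze follows the talkers
head = 0.46 * gaze + rng.normal(0.0, 1.0, t.size)  # the rest is eye movement
print(f"estimated head-movement gain: {head_movement_gain(head, gaze):.2f}")
```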
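As for the adaptive differential microphone, the sketch below shows a first-order design in the style of Elko and Pong: two closely spaced omnidirectional microphones are combined into back-to-back cardioids, and an adaptive coefficient steers a null towards noise from the rear. The actual device used in the study may have differed in its details; a first-order pattern like this has limited directivity, which is consistent with the small benefit observed.

```python
import numpy as np

def adaptive_differential_mic(front, rear, fs, d=0.012, mu=1e-3, c=343.0):
    """First-order adaptive differential microphone (sketch).

    front, rear : signals from two omni microphones, spaced d metres apart
    fs          : sample rate in Hz; the inter-mic delay is rounded to whole
                  samples here, so fractional delays are ignored
    """
    delay = max(1, int(round(d / c * fs)))
    cf = front - np.roll(rear, delay)   # forward cardioid, null at the back
    cb = rear - np.roll(front, delay)   # backward cardioid, null at the front
    beta, out = 0.0, np.zeros_like(front)
    for n in range(len(front)):
        out[n] = cf[n] - beta * cb[n]
        beta += mu * out[n] * cb[n]      # LMS step: minimize output power
        beta = min(max(beta, 0.0), 1.0)  # keep the null in the rear half-plane
    return out
```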
Finally, we have not yet done a comparison with real-life situations, but this is currently ongoing in Oldenburg.

So far for the work I did in my PhD project. Currently I have just started at the Erasmus Medical Center in Rotterdam, on a Marie Curie fellowship, and in this project I am looking at improving hearing device fitting with virtual reality. I am investigating whether we can use the virtual environments in the clinic for the fitting of hearing devices. The problem that I am looking into is that the current fine-tuning process is based on trial and error: the audiologist makes some changes in the settings of the hearing devices, and then the patient has to go home and try out these different settings, which is of course pretty time consuming. With virtual reality, new settings of hearing aids and cochlear implants could be tested right away, in relevant everyday life situations, in the clinic. And by using pairwise comparisons for the fine-tuning, the settings could then be adjusted better; a small sketch of this idea follows below. As I said, I am doing this project at the Erasmus Medical Center in Rotterdam with André Goedegebure, and the University of Oldenburg is involved as a collaboration partner, with Volker Hohmann and Giso Grimm.

So what do we have to do to implement these virtual environments in the clinic? First of all, we have to present them using VR glasses and a loudspeaker array. This makes the setup much more portable: we cannot set up such a big cylindrical screen in a clinical environment. Loudspeakers are still needed because we are presenting to people with hearing devices. In order to do this, we also have to switch to a different game engine, but the good thing is that this also improves the shading of the virtual environment, which could help to make it more realistic.

Another important point is the communication between the patient and the audiologist. This should be possible while being in the virtual environments, because you cannot just take off the VR glasses all the time; that would be pretty annoying. This would of course also add some interaction to the virtual environment, which could help with the realism.

Another important point is that we have to reconsider the selection of relevant situations. Karolina Smeds and co-authors have a publication on which listening situations occur in the daily life of hearing impaired persons, and they show that not only speech perception is important: speech communication accounts for about 31% of the cases, but there are also focused listening situations, which are partly speech but can also be music, and non-specific listening situations, which is monitoring the environment or passive listening. We also did a survey with patients, in which they had to indicate in which listening situations they wanted to improve their hearing. Speech situations were mentioned most, but they also mentioned non-speech situations, for example listening to music. Participating in traffic was mentioned quite a lot, as was monitoring the environment, so that they know what is going on around them. These are also important situations to include in the fitting process.

Finally, there are more and more hearing aids and also cochlear implants with automatic recognition of listening situations and selection of the corresponding programs. Of course, if we are fine-tuning the hearing devices, we should fine-tune each program in the corresponding situation it was intended for. In order to do this, I will ask hearing aid manufacturers for more information on these programs and the situations they were intended for; so if you are listening, please get in contact with me.
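Here is the minimal sketch of pairwise-comparison fine-tuning mentioned above: the patient listens to pairs of settings in a virtual environment and indicates a preference, and the setting that wins the most comparisons is selected. The actual procedure in the project may well be more sophisticated (for example adaptive rather than round-robin), and the `present_and_ask` callback is a placeholder for the playback and the patient's response.

```python
from itertools import combinations

def pairwise_fine_tuning(settings, present_and_ask):
    """Round-robin pairwise comparison: every pair of hearing device settings
    is presented in the virtual environment, and the setting preferred most
    often wins. 'present_and_ask' must play both settings and return
    whichever one the patient prefers."""
    wins = {s: 0 for s in settings}
    for a, b in combinations(settings, 2):
        wins[present_and_ask(a, b)] += 1
    return max(wins, key=wins.get)

# Mock example: three hypothetical gain settings and a simulated patient
# who consistently prefers less high-frequency gain.
settings = ["gain_flat", "gain_mild_hf", "gain_strong_hf"]
rank = {"gain_flat": 0, "gain_mild_hf": 1, "gain_strong_hf": 2}
mock_patient = lambda a, b: a if rank[a] < rank[b] else b
print(pairwise_fine_tuning(settings, mock_patient))  # -> gain_flat
```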
Then, of course, we also want to evaluate whether virtual reality really adds something to clinical practice. We think that it could make the fine-tuning process faster, because we can reduce these periods of trying out settings at home. We can increase patient involvement, which hopefully also makes patients more satisfied, both with the fine-tuning process and with the hearing devices. And we hope the hearing devices will function better. In order to test this, we also need more ecologically valid evaluation methods, which is a challenge for this project. So this is what we are working on for the next two years, and I hope that next time I can report some progress on this project. Thank you for your attention.