Hi, my name is Niels Pontoppidan and I'm a research manager here at Eriksholm Research Centre. First, thank you to the organizers of the Virtual Conference on Computational Audiology 2021 for giving me the opportunity to give this talk on learning from audiological data collected in the lab and in the real world. Parts of this work were supported by the European Union through the EVOTION grant, and that is hereby acknowledged.

When we talk about audiological data, the first thing we need to consider is that audiological data is actually a vast collection of different data types. Here are just some examples: hearing data, which describes people's hearing; environment data, which describes the sound environments where people take part in communication and in their everyday life; usage and preference data, which describes how people with hearing loss use their hearing instruments and what they prefer; and sound data, which is becoming more and more important in developing new hearing instrument algorithms. And because this is not an exhaustive list, there are also future data types that are not part of it yet. When we look at these data as a whole, and at the computations and the learning that we can apply to them, that is what provides the new insights that enable us, as an industry and as an academic community, to improve life for people with hearing loss.

Let's first take a look at the hearing data. Next year we actually have an anniversary, because it is 100 years since the audiometer was first presented, and with it the definition of the audiogram. Then, in the 1930s, two important events took place. One was the Fletcher-Munson curves, the equal loudness contours: a lot of data describing the perceived loudness of different frequencies for different people, compiled into contours that describe how the general population perceives loudness across frequencies. At the same time, and perhaps it is just a special case of the loudness contours, we also got the definition of normal hearing: the 0 dB hearing level. Omitting a lot of interesting hearing data in between, we also see some interesting hearing data from the last decade. Some really interesting work took place in the US, where Liz Masterson and her co-workers characterized the impact of occupation on people's hearing loss across a large number of US industries. And the test audiograms that we put into hearing aids when we test and validate them were devised by Nikolai Bisgaard and his colleagues from a big pool of collected audiograms. So the audiogram alone is a really inspirational source, and we have been looking at it for 100 years.

Now let's move to the next data type: the sound environment data. This is a good example of how data from the real world influences the way we work in the lab. Back in 2015, Karolina Smeds and her colleagues presented work on the signal-to-noise ratios that people are exposed to in realistic sound environments. In the graph on the left we have the different sound environments on the x-axis and the SNR on the y-axis. Most of the SNRs are actually positive, around 5 to 10 dB, and not so many, if any, are around 0 dB or less, which is where a lot of our research actually takes place.
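To make this kind of analysis concrete, here is a minimal sketch of how per-environment SNR summaries might be computed from real-world recordings. This is not the Smeds et al. pipeline; the environment labels and dB values below are hypothetical placeholders, not data from the study.

```python
# Minimal sketch: summarizing hypothetical per-environment SNR estimates.
from statistics import mean, median

# Hypothetical SNR estimates (dB) grouped by sound environment.
snr_by_environment = {
    "car": [8.2, 10.5, 12.1, 9.7],
    "restaurant": [4.8, 6.3, 5.1, 7.0],
    "living_room": [12.4, 15.0, 13.8, 11.2],
    "street": [6.9, 8.8, 7.5, 9.1],
}

for env, snrs in sorted(snr_by_environment.items()):
    print(f"{env:12s}  median SNR: {median(snrs):5.1f} dB   "
          f"mean: {mean(snrs):5.1f} dB   n={len(snrs)}")

# Count how many environments fall at or below 0 dB median SNR, the
# region where much laboratory speech-in-noise research is conducted.
at_or_below_zero = [env for env, snrs in snr_by_environment.items()
                    if median(snrs) <= 0.0]
print("Environments with median SNR <= 0 dB:", at_or_below_zero or "none")
```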
So here we really see how real-world data influences how we conduct hearing research and development. On the right-hand side we see an example from the EVOTION study, where we compiled an overview of the sound environments throughout the day. On the x-axis you have the time of day, and on the y-axis the relative distribution of the four classes provided by the hearing aids' environment classification: quiet, noise, speech, and speech in noise, and how this varies across the day. We see that most people are in quiet at night, in the early morning, and late in the evening, and then there is a large increase in the amount of speech environments throughout the day. There is a little dip around pre-dinner time, where quiet again comes up as the most common class. What we have actually seen is that this profile across the day is stable when we look into different cohorts.

Next we tap into another set of usage data, but now we just look at how loud the sound environment is and how good the sound quality is. In this graph, from left to right we have how loud it is, going from quiet to loud, and from bottom to top we have the sound quality, from bad to good, which is actually a proxy for SNR. We have some reference conditions on the perimeter of this space. Starting to the far right, in blue, we have a rock concert. A little closer towards the center, but still in a loud environment, in brown, we have a buffet at a restaurant, with cutlery and what have you. To the far left we have a quiet sound environment, where a person was just sitting in an office working all by himself. And at the top there is a meeting between three people with really good meeting behavior, only one person speaking at a time, so a really good signal-to-noise ratio. In the middle we then see the distribution of sound environments that people with different kinds of hearing loss experience through the day. Here we can begin to see traces of differences between the sound environments that people with different kinds of hearing loss experience, or perhaps are forced into because of their hearing loss. With this data we can begin to tap into how the sound environment, your behavior, and your hearing loss are related, and it seems that people with a more severe hearing loss either go for somewhat quieter environments or for better sound quality than those with lower, that is better, thresholds. But this is just a first tap into this data.

We can also look at the same data combined with different processing strategies, or programs, and learn more from that. On the left you see a figure that resembles what you saw before. Now the sound classes are distributed into four different plots, and along the day we see the relative distribution of how much the medium program, which is the default, and the low and high help-in-noise (noise reduction) programs were used. As we have seen many times, the default medium program is the one used most of the time, across all the different sound environments throughout the day. But we see a distinct bump in the use of the high noise reduction program in the late afternoon and early evening, around dinner time, in the noisy conditions, but not in the other sound environments. So this is a really deep insight into how hearing aids provide help, and how different programs provide help at different times.
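To illustrate the kind of computation behind these distributions, here is a minimal sketch in Python, assuming hearing-aid logs with one row per logging interval. The column names, class labels, and program names are hypothetical, not the actual EVOTION logging schema.

```python
# A minimal sketch: relative program usage per sound class and hour of day,
# computed from a toy hearing-aid log (hypothetical schema and values).
import pandas as pd

log = pd.DataFrame({
    "hour":    [8, 8, 12, 12, 18, 18, 18, 22],
    "class":   ["quiet", "speech", "speech", "speech_in_noise",
                "speech_in_noise", "speech_in_noise", "noise", "quiet"],
    "program": ["medium", "medium", "medium", "high_nr",
                "high_nr", "medium", "high_nr", "medium"],
})

# Share of each program within every (sound class, hour) cell, mirroring
# the per-class, per-hour usage distributions described in the talk.
usage = (log.groupby(["class", "hour"])["program"]
            .value_counts(normalize=True)
            .rename("share")
            .reset_index())
print(usage)
```

With a real log spanning many users and days, the same group-and-normalize step would surface the dinner-time bump in high noise reduction usage within the noisy classes.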
To the right we have a figure from a recent study by Claudia Andersson, Tobias Neher, and my colleague Jeppe Christensen that shows the relationship between ecological momentary assessment (EMA) and the SSQ-12 questionnaire. The EMA data that you see here corresponds to the questionnaire data obtained from the SSQ-12, but the EMA data provides a deeper insight into the preferences, because we can begin to understand what kinds of sound environments are actually driving the preference, and we have the data to verify that the SSQ data also corresponds to the sound environment data, the real usage of the hearing devices. So it is a quite inspirational look into combining questionnaires with ecological momentary assessment.

And then finally we have sound data, which is really becoming important in developing new hearing instrument algorithms these days. On the right-hand side of this slide we see how we have developed a new hearing aid algorithm based on training: the algorithm was trained on 12 million real-life sound scenarios, where it learned to prioritize speech while preserving as many of the other auditory cues as possible. What we see here is that we now have an algorithm in a hearing device out in the real world; people can wear it right now. In that hearing instrument, an algorithm trained using machine learning is processing the sound. It is not doing all the processing, but it is doing the processing in some very critical situations. So we now have data that we can use to develop new, ground-breaking hearing instrument algorithms that will enable people to segregate voices, and already we see that using sound data, realistic sound examples, and big pools of data is improving the algorithms in hearing instruments around the world.

Summarizing: I have presented a number of different kinds of audiological data collected in the lab and in the real world and given examples of how they are being used in hearing aid research. We have looked at the hearing data, the audiograms and the equal loudness curves, which we are still using. We have looked at the sound environments, and how tapping data from hearing aids provides a new and deeper insight into the sound environments that people with hearing instruments are exposed to. We have also seen how the same data, combined with different programs and with ecological momentary assessments or ratings, can be used to learn more about which hearing instrument algorithms work in which environments. And finally, we have seen how sound data can be used to develop new hearing aid algorithms. All of that, together with the new and futuristic audiological data that is yet to come, is what enables us as an industry and as an academic community to improve hearing for people with hearing loss. Thank you for attending my talk.
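To give a concrete, if simplified, picture of the training idea described above, here is a conceptual sketch of a mask-based enhancement model with an objective that prioritizes speech while penalizing removal of background cues. This is not the commercial algorithm from the talk, whose architecture, data, and loss are not public; the toy network, the placeholder targets, and the 0.1 weight are all assumptions made for illustration.

```python
# Conceptual sketch only: a toy gain-estimation network trained to favor
# speech while preserving some of the background (placeholder data).
import torch
import torch.nn as nn

class TinyEnhancer(nn.Module):
    """Toy network predicting a gain per time-frequency bin."""
    def __init__(self, n_bins=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_bins, 128), nn.ReLU(),
            nn.Linear(128, n_bins), nn.Sigmoid(),
        )

    def forward(self, mixture):
        return self.net(mixture) * mixture  # apply estimated gains

model = TinyEnhancer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One hypothetical training step on magnitude spectra.
mixture = torch.rand(32, 64)      # mixed sound scene
speech = 0.7 * mixture            # placeholder "clean speech" target
background = mixture - speech     # placeholder residual cues

out = model(mixture)
# Weighted objective: prioritize speech accuracy, but penalize removing
# too much of the background so other auditory cues are preserved.
loss = nn.functional.mse_loss(out, speech) \
     + 0.1 * nn.functional.mse_loss(mixture - out, background)
opt.zero_grad()
loss.backward()
opt.step()
print(f"toy loss: {loss.item():.4f}")
```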