Thank you everyone for tuning in to this session about realistic listening environments. Before we dive in, I will ask one question: what is the benefit of directional microphones? Let's review two papers I found in the literature. Here are my summaries. The paper on the left gives us convincing evidence that directional amplification improves speech understanding in noise, whereas for the one on the right, the evidence was not so clear. Upon reviewing the methodologies, I found that the one on the left was based on a standard laboratory experiment, whereas the one on the right was based on a real-life listening experiment. Please keep this in mind as we go through this session today. So today, I will be talking about standard versus complex background noises, which will help me define what we mean by realistic environments, and then we will compare realistic versus real-world environments.

What is the standard laboratory setup? The figure illustrates a standard laboratory setting we use at the labs. It consists of a five-loudspeaker array. Typically, a target is positioned to the front, and the noises are positioned around the listener. Often the target consists of short sentences, and the noise consists of many distracting sounds. The task of the listener, sitting at the centre of the array, is to recall the words in the short sentences. We adjust the level of the target in the noise in such a way that the listener can only recall about 50% of the words in the short sentences. We use these standard arrangements to evaluate hearing devices. The question is, do we ever experience standard setups in real life? To answer this question, we need to understand the acoustic characteristics of these environments. The arrows in the figure illustrate the propagation pathways between the point sources, the loudspeakers, and the listener. As you can see, they are very uniform. Let's compare these against a real-life listening situation.
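As an aside, the level adjustment described above, where the target level is changed until the listener recalls about 50% of the words, is typically implemented as an adaptive staircase. Here is a minimal sketch of that idea; the function `present_trial` and the simulated listener inside it are illustrative placeholders, not part of any real test battery, and the numbers are made up.

```python
# A simple one-up, one-down staircase that adjusts the target-to-noise
# ratio (SNR) until performance converges on about 50% correct.
import random

def present_trial(snr_db):
    """Placeholder for one sentence trial: returns True if the listener
    recalled the sentence. Here we simulate a listener whose chance of
    success grows linearly from 0% at -10 dB SNR to 100% at +10 dB SNR."""
    probability_correct = min(max((snr_db + 10.0) / 20.0, 0.0), 1.0)
    return random.random() < probability_correct

def adaptive_snr(start_snr_db=0.0, step_db=2.0, n_trials=30):
    """One-up, one-down staircase: after a correct response the SNR goes
    down (harder), after an incorrect one it goes up (easier), so the
    track oscillates around the 50%-correct point."""
    snr = start_snr_db
    track = []
    for _ in range(n_trials):
        track.append(snr)
        if present_trial(snr):
            snr -= step_db   # correct: make the task harder
        else:
            snr += step_db   # incorrect: make the task easier
    # Average the second half of the track as an estimate of the
    # 50%-correct SNR (the speech reception threshold, SRT).
    half = n_trials // 2
    return sum(track[half:]) / (n_trials - half)

srt = adaptive_snr()
print(f"Estimated speech reception threshold: {srt:.1f} dB SNR")
```

In a real test the simulated listener would of course be replaced by scoring of an actual response, but the up-down logic is the same.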
Here we have an illustration of a real-life listening situation, with many distant sound sources and near sound sources, and, in relation to those, reflecting surfaces. What we observe is that the acoustic characteristics of a real-world listening condition are complex in comparison to the standard condition. The question is, how do we go about reproducing these complex acoustic characteristics in the laboratory? To do so, we require very complex acoustic setups, for example the spherical loudspeaker array we use at the National Acoustic Laboratories. Once we have a setup like this, how do we go about selecting the type of material we want to play back in these conditions? One way to select material is to go into the real world and record it. Once we have recorded these environments, we can play them back through the spherical loudspeaker array in the lab. The problem with this approach is that real-world environments tend to be extremely dynamic, and therefore it is extremely difficult to test devices with high sensitivity in the assessments. To get around that, we often use simulations. The illustration on the left shows a simulation of a restaurant comprising reflecting walls and table surfaces. In addition, we have pairs of people conversing around the tables at different distances from the listener. When we reproduce these environments in the lab, people report very natural and realistic listening experiences.

How can we compare the two environments? To be able to do so, we need to recruit participants. So we went ahead and recruited a number of normal-hearing and hearing-impaired participants to conduct an experiment. What we are really trying to understand is the ratio between the target and the noises at which the participants can only recall about 50% of the sentences presented to them in the two environments. The figure on the right shows the results.
The x-axis shows the results for the background noise in the standard condition and the y-axis for the complex background noise condition. The dotted line represents unity between the two. The squares represent the results for the normal-hearing listeners and the circles for the hearing-impaired listeners. For the most part, the observations are well above the dotted line. What does that mean? It means that, on average, people performed better in the complex environment than they did in the standard environment. The normal-hearing listeners performed about 1.8 dB better, and the hearing-impaired listeners approximately 4 dB better. Let's now consider the benefit, measured here as the difference between the aided and the unaided conditions for the hearing-impaired group. Here are the results. The x-axis shows the results for the standard condition and the y-axis for the complex condition. Again, most of the results lie above the dotted line. What that really means is that people, on average, perceived greater benefit in the complex environment than they did in the standard environment, by about 1 dB in this experiment. It is also worth noting that the range of benefits in the standard environment was smaller than the range we observed in the complex environment.

The question we might ask is then, do we aim to hear only about 50% of sounds when we go about conversing in the real world? We would argue that's not the case. In the real world, it is perhaps more important to get the message that people try to convey to us during conversations. A better way to measure that is by understanding the degree of comprehension people have in these environments. A group at NAL developed a test which we call the comprehension test, or the NAL DCT. DCT stands for Dynamic Conversational Test. This consists of listening to speech passages and then responding to a 10-item questionnaire related to the messages in the passages.
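Before moving on, it may help to make the aided benefit measure above concrete. Benefit is simply the difference between unaided and aided performance; the sign convention sketched here, unaided SRT minus aided SRT so that a positive number means the device helped, is an assumption for illustration, and the SRT values are made up.

```python
def benefit_db(unaided_srt_db, aided_srt_db):
    """Benefit in dB: how much lower (better) the speech reception
    threshold is with the hearing device than without it."""
    return unaided_srt_db - aided_srt_db

# Illustrative (made-up) SRTs for one listener in one environment:
print(benefit_db(unaided_srt_db=2.0, aided_srt_db=1.0))  # prints 1.0
```

Computing this difference separately in the standard and the complex background noise gives the two coordinates of each point in the benefit figure.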
The passages cover topics such as public transport information, headline news, information about tourist attractions, and so on. The picture on the right shows a subject who has listened to a five-minute passage and is answering the 10-item questionnaire. Let's compare the two tests. To make a fair comparison between a sentence-recall test and a comprehension test, we modified the target and the noise in such a way that the listeners performed about the same, at 50%. Here are the results. The x-axis shows the performance at 50% for the sentence test, and the y-axis shows the performance at 50% for the comprehension test. We observed almost no difference between the groups. However, when we look at the difference between the aided and the unaided conditions, as shown in the figure, we observe a significant difference between the groups. However, we did not observe any relationship between the two tests either.

So now, what is a realistic listening condition? We would argue that a realistic listening condition is one that encompasses the effects of distant sound sources and reverberation, as well as a task that requires message processing and memory. We discussed this, and we argued that complex background noise, in addition to a comprehension test, is a representation of a realistic listening condition. So how does a realistic listening condition compare to a real environment? After this, the group at NAL recruited participants in pairs and asked them to perform the DCT in the real world at around 95% intelligibility. This was a more natural communication condition for that environment. Then they reproduced that test in the complex background noise back at the labs. Now, real-world conditions are still dynamic, so how can the two be compared? To be able to do that, the group at NAL developed what we call the NAL ecological momentary assessment tool, or NEMA app. What NEMA enables us to do is score the performance of the listener and at the same time record the background noises.
We can use that information to match the complexity of the background noise in the real world, and with that, we can recreate it in the complex laboratory setup back at NAL. Here are the results. The x-axis shows the real and the realistic conditions, and the y-axis shows the scores. On average, we did not see a significant difference between the conditions. Interestingly, when we asked people about their spatial awareness, we also did not see any significant difference between the conditions. The spatial awareness questions related to things such as how big the room was and whether or not they were able to identify multiple sound sources in the environment.

So, in summary: the test performances we observed in the standard and complex background noise conditions were different for the same recall task. The aided benefit measured in the word-recall and comprehension scores was different, despite the fact that we used the same background noise. However, the behavioural measures were comparable between the realistic and real-life listening conditions. The take-home message today is that a realistic listening environment is a complementary research tool that helps us to understand, using scientific methods, the impact of hearing devices in the real world by immersing the listener in a real-world-like listening experience. If you want to know how we have actually used this technology to assess devices, please stay tuned for the upcoming sessions here in Soundbytes. Thank you for listening.