So, hello everyone. Thank you for joining this session today. I'm Vicky Chiang, a senior research audiologist at National Acoustic Laboratories. It's my pleasure to talk about a recent project my colleagues and I conducted to explore how well people can understand speech via video conferencing platforms. During the COVID pandemic, I believe we all came to appreciate the benefits of using Zoom, Microsoft Teams, Google Meet and so on. It saves a lot of travel time, right? It makes it quick and easy to schedule meetings and to attend virtual conferences or workshops like today's. And it has increased efficiency by bringing remote workers together and keeping people networked. Communication via video conferencing has become very common and important in our daily lives. For normal hearers, we seem to have already got used to this modern way of communicating, and would probably rather work in this flexible way, right? Most of the time we think we can communicate well with people on video calls, although you may feel very tired after a whole day of online meetings. But how about people with hearing loss? How well can they communicate via online platforms? Do they have listening problems when communicating with people online? And do they need additional help during video calls? In this session I will address three learning objectives: to see how well people with and without hearing loss understand speech both in person and via a video conferencing platform; to understand people's level of acceptability of video conferencing-based audiological speech assessment; and to explore a question from hearing health professionals: can speech tests be conducted reliably over video calls?
Okay, so in the audiology field we know that tele-audiology offers a range of benefits for clients and audiologists, mainly for conducting video consultations and other non-diagnostic services, or for using remote apps to help adjust hearing aids. But as far as I know, there are currently no validated tools for assessing speech communication outcomes in the context of a video call. So the key aims of this pilot study were to use existing materials to develop an assessment battery for use with video conferencing platforms; to explore how video calls impact the ability to understand speech and follow conversations for people with and without hearing loss; and to examine whether hearing aids can improve the experience for people with hearing loss when communicating on a video conferencing platform. The hypotheses were: first, people with hearing loss may communicate less effectively, or spend more effort communicating, via the VC platform compared to normal hearers. Second, hearing aids may improve the performance of people with hearing loss when communicating via the VC platform. Third, people with normal hearing will show good test-retest reliability of speech communication performance in video calls. And lastly, participants will show a high degree of acceptability of VC-based audiological assessments. We recruited 32 adults; half of them were normal hearers and the other half were hearing aid users. All the normal hearing participants had passed hearing tests and reported no hearing difficulties in daily life. All the hearing aid users had mild to moderate hearing loss in their better ear. And all participants had sufficient English proficiency. The assessment materials we used include standardized audiological assessment tools, such as BKB-like sentence lists in quiet and noise conditions, to evaluate speech perception performance.
And the NAL Dynamic Conversation Test, the NAL-DCT, to evaluate speech comprehension performance. We also used self-reported questionnaires to capture participants' listening difficulty and effort, mental workload, satisfaction with sound quality, and overall rating of the acceptability of the VC-based audiological administration method. Just to give you a bit of background on the testing for those who may not be familiar with these assessments: for the speech perception task in quiet, we presented 16 sentences via a loudspeaker in front of the participants, asked them to repeat back as much of each sentence as possible, and then calculated the percentage of target words correct in each test list. In the noise condition, the target speech sentences and competing babble noise were also presented from the same loudspeaker, and the levels of the speech and babble noise were varied according to whether a participant could repeat more than half of the key words in a sentence correctly. We used this adaptive procedure to measure the signal-to-noise ratio in dB that represents a person's speech reception threshold for 50% correct. For the speech comprehension task, participants listened over the loudspeaker to two conversation passages in babble noise at a fixed 9 dB signal-to-noise ratio, and they needed to answer some comprehension questions after each passage. As for the study design, each participant was tested face-to-face and via the video conferencing platform during one appointment. Participants were allocated randomly to either the face-to-face or the video conference condition first, to balance the impact of practice effects. In the VC condition, the normal hearing group was tested twice to investigate the test-retest reliability of the assessments via the video conferencing platform.
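To make the adaptive procedure concrete, here is a minimal sketch of a one-up/one-down staircase that converges on the signal-to-noise ratio for 50% correct. The function names and the starting SNR are illustrative assumptions, not the study's actual implementation; the 2 dB step size matches the step size mentioned later in this talk.

```python
def sentence_correct(response_keywords, target_keywords):
    """Return True if more than half of the key words were repeated correctly."""
    n_correct = sum(1 for w in target_keywords if w in response_keywords)
    return n_correct > len(target_keywords) / 2

def run_staircase(trial_outcomes, start_snr_db=0.0, step_db=2.0):
    """One-up/one-down adaptive track: make the next sentence harder (lower SNR)
    after a correct trial, easier (higher SNR) after an incorrect one.
    The mean of the visited SNRs estimates the 50% speech reception threshold."""
    snr = start_snr_db
    track = []
    for correct in trial_outcomes:
        track.append(snr)
        snr = snr - step_db if correct else snr + step_db
    return sum(track) / len(track)  # SRT estimate in dB
```

Because the rule moves down on a correct response and up on an incorrect one, the track oscillates around the level where the listener is right half the time, which is exactly the 50% threshold the speaker describes.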
Participants with hearing loss also completed the test twice, with and without wearing hearing aids, so that we could investigate the impact of hearing devices on their performance. For each testing condition, we conducted one BKB sentence list in quiet and two passages of the NAL-DCT test in noise, and then asked participants to complete the questionnaire. This diagram shows the experimental setup in the face-to-face and video conferencing sessions. The face-to-face assessments were conducted with both the participant and the researcher in the same room, and the speech materials were presented directly from a loudspeaker connected to a testing laptop; this is the same as a typical laboratory experiment. In the video conference assessment, the participant sat in the same room as in the face-to-face condition, and the researcher sat in an adjacent quiet room. A video conference call was established between the two testing laptops, the test materials, including the speech and noise, were sent over the Zoom platform, and we presented the speech and noise from the same loudspeaker. Now let's have a look at how well people with and without hearing loss understand speech both in person and via the video conference platform, starting with speech perception performance. In the quiet condition, the top table shows that for the normal hearing group, the BKB quiet scores were 100 percent correct across all three conditions. For the hearing impaired group, there was no significant difference in scores between the face-to-face and VC1 conditions, which are the aided conditions; they were both pretty high, above 99 percent correct, and there was also no significant difference between the normal hearing group and the hearing impaired group. However, the results in VC2, the unaided condition, were significantly worse than in the face-to-face and aided VC conditions. This makes sense, right?
But from another perspective, this suggests that the BKB quiet task was sensitive to the benefits of hearing aids when tested via a video conferencing platform. For the BKB-in-noise test, as I mentioned earlier, we used an adaptive rule to measure the signal-to-noise ratio in dB at the 50 percent correct speech reception threshold. This figure shows the means and 95 percent confidence intervals of the signal-to-noise ratios in the three testing conditions. On the y-axis, a lower signal-to-noise ratio means a better speech perception score in noise, and the filled black circles and open squares represent the results for the normal hearing and hearing impaired groups respectively. For both groups, on average, participants showed a significantly lower signal-to-noise ratio, which is a better BKB-in-noise score, in the face-to-face condition compared to the two VC conditions. When we compare the SRT scores in the two VC conditions, for normal hearers there was no significant difference in performance. However, the hearing impaired group showed significantly better scores when communicating with hearing aids, the VC1 condition, compared to the no-hearing-aids condition, VC2. Looking at the VC1 results, we can see that even with hearing aids on, the hearing impaired participants' speech perception scores were still poorer than the normal hearers'. How about speech comprehension performance? Similar to the speech-perception-in-noise results, this figure shows the means and 95 percent confidence intervals of the average DCT percentage scores from the two passages. On the y-axis, the higher the score, the better. For people with normal hearing, the DCT scores were similar, all above 85 percent, regardless of the face-to-face and VC conditions, and there was no significant difference in their performance between the two video conference conditions.
The hearing impaired group showed significantly worse scores in the two VC conditions compared to in person, and there was a significant difference between the aided and unaided conditions when communicating via the VC platform. Also, we can see that in the VC conditions, again, even when people were wearing hearing aids, their speech comprehension scores were still worse than the normal hearers'. If you remember, one of the hypotheses was to see whether people with normal hearing would show good test-retest reliability of speech performance via the VC platform. To examine that, we produced Bland-Altman plots to evaluate the agreement between the two video conferencing conditions. In a Bland-Altman plot, an individual's average score on a signal-to-noise ratio or DCT measure, on the x-axis, is plotted against their difference score on that measure, on the y-axis. The solid line in the middle represents the average difference, which is the bias, and the dashed lines represent the upper and lower 95% limits of agreement. We can see that both the BKB and the DCT measures show relative symmetry in points above and below zero, and the mean bias values on the plots were close to zero for both tasks. This indicates that participants didn't perform better or worse in one particular condition; in other words, the normal hearing participants showed very good test-retest reliability via the video conferencing platform. In addition, the comparison of the two VC conditions also suggests that we can detect changes as small as the step size of 2 dB, so the VC condition hasn't compromised the sensitivity of the test. Okay, so far we have looked at people's speech communication performance in face-to-face and video conferencing-based administrations using the standardized audiological measurements.
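For those unfamiliar with Bland-Altman analysis, the bias and 95% limits of agreement described above can be computed from the paired scores as follows. This is a generic sketch of the standard method, not the study's analysis code, and the example data are invented for illustration.

```python
import statistics

def bland_altman(scores_a, scores_b):
    """Return (bias, lower_limit, upper_limit) for paired measurements.
    Bias is the mean of the pairwise differences; the limits of agreement
    are bias +/- 1.96 * SD of the differences, which cover roughly 95% of
    differences if they are approximately normally distributed."""
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

A bias near zero, as reported in the talk, means neither test session was systematically easier, and narrow limits of agreement mean individual retest scores stayed close to the original scores.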
Now, let's have a look at how easy people found communication both in person and via the video conferencing platform, based on their self-reported responses, starting with listening effort. The term listening effort describes the mental workload, or energy, a listener may need to allocate as cognitive resources when trying to extract meaning from a speech signal. Recently, listening effort has come to be considered an important outcome measure, and it can be assessed in different ways, such as with subjective tools, electro-physiological tools, or dual-task paradigms. In this study, we used a self-reported questionnaire and asked the participants to give two ratings. The first was a 7-point scale question evaluating how much effort it took for them to listen to the passages. The second was a 10-point scale question: how much effort did they use to follow the conversations? The ratings go from no effort, or very little, to extreme effort. For the normal hearing group, the black bars show similar ratings on the two listening effort questions across the listening conditions. For people with hearing loss, we can see that they spent more listening effort when communicating via the video conferencing platform compared to the face-to-face condition, and they spent much more listening effort in the VC2 condition, following conversations without wearing hearing aids. And how about people's self-reported responses on mental workload? We used the NASA Task Load Index (NASA-TLX) to subjectively assess people's perceived cognitive workload while performing a task. It rates self-perceived workload across six dimensions, including mental demand, physical demand, temporal demand, effort, performance and frustration level, and the ratings go from very low to very high to determine an overall workload rating in each dimension.
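As a concrete illustration of the NASA-TLX just described, here is a sketch of the raw (unweighted) TLX variant, where the overall workload is simply the mean of the six dimension ratings. This is a common simplification of the original weighted-pairs procedure; the study may well have scored it differently, so treat this as an assumption.

```python
# The six NASA-TLX dimensions named in the talk.
TLX_DIMENSIONS = ("mental demand", "physical demand", "temporal demand",
                  "performance", "effort", "frustration")

def raw_tlx(ratings):
    """Raw-TLX overall workload: the unweighted mean of the six dimension
    ratings, each given on the same low-to-high scale (e.g. 0-100)."""
    missing = [d for d in TLX_DIMENSIONS if d not in ratings]
    if missing:
        raise ValueError(f"missing TLX dimensions: {missing}")
    return sum(ratings[d] for d in TLX_DIMENSIONS) / len(TLX_DIMENSIONS)
```

The per-dimension ratings themselves are also informative, which is why the results below are reported dimension by dimension rather than only as an overall score.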
The results showed that in the face-to-face condition, people wearing hearing aids reported that they expended more mental workload compared to normal hearing people. When communicating via the VC platform, people wearing hearing aids also expended more effort in three mental workload dimensions compared to the normal hearing group. And when comparing the hearing impaired group's ratings with and without hearing aids, people reported higher workload levels on physical demand, effort and frustration when accomplishing the task unaided. However, for normal hearing people, self-reported responses showed similar ratings on mental workload across all three testing conditions. When we asked the participants to rate their overall satisfaction with the testing sessions with the audiologist, and how easy they thought it was to understand the task instructions, we found that the testing conditions didn't affect their self-rated overall satisfaction. The overall ratings of ease of use indicate that people were satisfied with both the face-to-face and video conferencing conditions. At the end of the appointment, we also asked two questions: which testing session did you like better, and how likely would you be to conduct assessments via video conferencing in the future? The majority of respondents from the normal hearing group reported no preference for a particular condition. However, more of the people wearing hearing aids would still prefer face-to-face if possible. And both groups showed a high degree of acceptability of VC-based audiological assessment in the future. So, some take-home messages. For our first hypothesis, our finding is yes: when communicating via video conferencing platforms, people with hearing loss communicated less effectively on the speech perception in noise and speech comprehension in noise tests compared to the normal hearing people.
For the normal hearing group, listening effort appears to be similar regardless of whether they were communicating in the face-to-face or video conferencing conditions, whereas people with hearing impairment expended more listening effort and mental workload when communicating in video calls. The second hypothesis was to explore whether hearing aids would improve performance for hearing aid users when communicating via video calls, and the answer is yes: hearing aids did improve the performance of people with hearing loss. However, they didn't restore speech understanding and comprehension to the same level as normal hearers, or indeed to how those participants performed face-to-face. So there's a great need to develop and evaluate novel technologies to assist people with hearing impairment in communicating more effectively in video calls. It will be very exciting to conduct more research to improve the lives of people with hearing loss in Australia and around the world. Thirdly, as I mentioned at the beginning, there are currently no validated tools for assessing speech communication outcomes in the context of video calls, and we don't know how feasible it is to use standardized audiological assessment tools via a VC platform. Well, from our findings, we observed good test-retest reliability in the normal hearing group for the assessments delivered via the VC platform. This suggests that it should be possible to use repeated testing via the VC platform, for example to detect changes over time or in response to a new technology. We also found that both the speech perception in noise and the speech comprehension tasks appear to be sensitive enough to distinguish hearing impaired people's performance with and without hearing aids when communicating in video calls. So this suggests that they may be sensitive enough to detect the effects of other interventions designed to improve people's communication online.
The fourth research question was to see whether people would accept VC-based audiological assessment. Well, although people with hearing loss showed poorer speech testing scores and higher listening effort in video calls compared to face-to-face, the overall satisfaction and acceptability ratings indicate that both the hearing impaired group and the normal hearing group were satisfied with both testing conditions with the audiologist. In general, participants showed a high degree of acceptability of VC-based audiological assessments. So this suggests that administering audiological assessments via video conference holds potential for clinical practice, and we hope to improve the current tele-audiology service so that it can be delivered at the same high quality as the hearing care services provided in person. So thanks for listening, and I would also like to thank my colleagues. I hope this session gave you some answers on how well people can understand speech via video conferencing platforms. If you have any questions, please contact me. Thank you.