So good morning, everybody. My name is, excuse me, my name is Anne Thomas, and I'm the president of the HLAA Diablo Valley Chapter, and we'd like to welcome you to our meeting today. We're going to have Ed Auer, who's from George Washington University, give us a presentation on "Can Training Improve Understanding for Listeners with Hearing Loss in Noisy Situations?" Before we turn the program over to Ed, we'll provide directions for Zoom to make sure that everybody is familiar with how to use all of the features that are available to us. These directions only apply to the desktop version and not to your iPhone or iPad, if you happen to be using one.

The first thing you need to be familiar with is the toolbar, which runs across the bottom of your screen; it's black, like on this slide. The first thing we want you to know about is how to turn your microphone on and off. The orange arrow there points to the microphone icon that says "mute." We'd like to ask that everybody mute themselves unless they're talking, so that we don't get feedback. As you're probably all familiar, that feedback kind of goes "whoa," and when all the microphones are off, that doesn't happen.

There's also an option to activate your video or turn your video off. We'd like to ask that everybody keep their video on unless they're doing something that they wouldn't want everybody in this meeting to see. It's very easy to forget that your video is showing you; at the beginning of the pandemic, a woman proceeded to take off her top, and we're sure that she didn't intend to do that. So if there's something that you don't want anybody to see, please remember to turn off your video.

The chat option is available to everyone, and we'd like people to feel free to comment to all of us or to their friends. We'd also like to ask that you change your name, so that when you're in the chat we can see where you're from; just add either your chapter or your location after your name. The way you change your name is to drag your cursor over your image, and in the upper right-hand corner you'll see three horizontal dots. When you click on those dots, there's an option to rename yourself. It's exciting for us to see where everybody's coming from, and Zoom offers us the opportunity to engage with people all over the country.

The most important feature for all of us, of course, is captions. On the slide, the blue arrow is pointing toward the CC button. If you click on the CC, you have three options: one is to turn on the subtitles, one is to show the transcript, and the other is to adjust the size of the captions. One of the really nice things about showing the transcript, for those of us with hearing loss (I'm assuming that you're all similar to me), is that I often don't realize that I didn't understand something until it has already scrolled off the three lines of captions showing at the bottom of the screen. If I have the live transcript turned on, I can easily look over there, scroll back to see what I missed, and then come right back to the captions.

Since we have a lot of people in this meeting today, we'd like to ask that everybody click on Reactions to raise their hand when it comes time for Q&A. You also have other emoticons in there that you can click on to let people know there's something you like.
In the previous slide I talked about the CC icon, and this is what it looks like when you click on subtitle settings to increase the font size. As you can see, I like my captions big, and I have them set as large as possible. Zoom has added a feature that's very nice for us and helps us with lip-reading: you can move the captioning window anywhere you would like on the screen. The way you do that is to click down on the captioning window and drag it anywhere you want. If the captioning window happens to be covering somebody's mouth, which means we couldn't lip-read them at all, just drag it away and put it where you would like it.

This is what the raise-hand feature looks like. It's the smiley face with the plus, and as you can see, the first option in the pop-up window is "raise hand." You click that when you would like to be called on, and when you're done, you click it again (it changes to "lower hand") to lower your hand.

We'd like to ask that you... something happened there. Oh, actually, I'm missing a slide here. This is where I was going to introduce Ed; I had a placeholder slide there, but I must have hidden it this morning by mistake. So, it's with great pleasure that we welcome Ed Auer today. Ed is a cognitive psychologist who studies multisensory speech perception. He received his PhD from SUNY at Buffalo, and throughout his career he has conducted behavioral and neuroscience research at Gallaudet University, the House Ear Institute, and the University of Kansas. He is an associate research professor at George Washington University. He's also a member of SeeHere, the startup that developed the online research platform for conducting testing and training. Ed, we're going to turn it over to you, and thank you very much for coming today. We're all very excited to hear what you have to say.

Well, thank you. Thank you, and thank you, Zoher, for inviting me, and thank you also to the Diablo Valley chapter of HLAA for hosting this. Before I get started: I'll be sharing a video that is the presentation, and I have the option to present with or without captions. Since you have somebody who's going to live caption, I can present without the captions, and then you can use all the features that you just described. So I will go ahead and share the one without captions, if that works. Great. Okay.

All right, so without further ado, I'll bring that up. One second here. Okay. I'll start the video, and we will have a pause about halfway through where I can answer some questions. Then at the very end, we will have a longer question-and-answer period. By all means, I'll be monitoring the chat; if you have questions, please do use the chat function, and I'll be able to respond when we have the time. All right, I'll go ahead and get this playing. It will take just a second to go from the title slide.

We are all faculty members at the George Washington University in the Department of Speech, Language and Hearing Sciences. Our team has collaborated to research multisensory speech communication for several decades. We conduct basic and applied research and develop custom technology for speech communication. Currently, we are translating what we have learned into tools for public use. We've conducted our research at several institutions, including Gallaudet and the House Ear Institute in California.
First, we should tell you the answer to the question in our title: can training improve understanding of noisy speech by listeners with hearing loss? The answer is yes, but maybe in a surprising way. Here is a chart showing training results from 13 adults with hearing loss, from a recent study. The pink color shows their word recognition before training, and the blue color shows their word recognition after training. Most of these listeners improved their word recognition after training for about 30 minutes in eight sessions. They did not train on the words that were tested.

The top row shows their gains in recognizing words when the words were in high noise and they could also see a video of the person who was talking. The second row shows their results for just listening. You can see that not everyone improved, but a few improved a lot. However, you might be surprised to learn that no one received listening training: everyone who improved on audio-visual and auditory word recognition was trained only on lip-reading. The bottom row shows how these adults did in lip-reading before and after training, and almost everyone improved. In other words, the improvements in listening and in audio-visual speech recognition in noise were the result of training on lip-reading.

Our results are good news for training on listening in noise. In contrast, recently published studies of auditory speech-recognition training have not been so encouraging. But adults with acquired hearing loss are not usually good lip-readers, and this gives them lots of room for improvement. More good news is that improvements in lip-reading can lead to improvements in audio-visual speech recognition, when you can both see and hear someone. Lip-reading a person better can also lead to understanding them better when they cannot be seen. We're now going to tell you some more about lip-reading and audio-visual speech recognition, but we thought we should first answer the question in the title of this talk.

In today's presentation, we're going to tell you some things about lip-reading that you may not have heard before. I will begin by defining what we mean by lip-reading. Next, I will discuss how watching the talker can help you hear better in noisy situations. Then I will present some important features of lip-reading training, along with why we think it's important to train on lip-reading. I will then let you know how you can help us out by participating in our online research studies; our participants are thanked with a gift card at the end of the study. We will conclude by showing some demonstrations of different training approaches, and we will take some breaks along the way for questions.

Lip-reading is the ability to understand speech by watching a person when they are talking. In the next few slides, I'm going to show you some examples of the speech information available on the talker's face. I feel this is important because some people believe that not much is available from the talker's face. When talking about lip-reading, we often focus on the shape of the lips, and clearly the shape of the lips is important for identification of consonants and vowels. In this slide, we show still frames that capture the talker's face at informative points when speaking four different English consonant sounds. We know that the lips press together to make the B consonant gesture. In contrast, the talker touches the lower lip to the bottom of the upper teeth in order to produce the consonant F.
However, when we look at these different consonants, we see that the term lip-reading is really a misnomer: it is not just the lip shapes that are important. Skilled lip-readers glimpse into the talker's mouth to see tongue movements, along with watching the rest of the talker's face, including the cheeks and the jaw. For example, as you can see in the image on the far right, the talker places the tip of his tongue through his teeth to produce the TH sound, as in "thin."

In this slide, we see the difference between the vowels "uh" as in "hut" and "ay" as in "hey." Both vowels have similar lip shapes, as you can see, but the tongue is in a different position, which can be seen by looking into the mouth. They also differ in the puffiness of the cheeks, which can be seen by looking at the regions around the mouth. You can feel the difference in your own mouth by alternating between "uh" and "ay" repeatedly. The differences that you can feel are also visible, and they are used by the skilled lip-reader.

In addition to what can be seen in still frames, skilled lip-readers use lip, jaw, cheek and tongue movements to understand speech. Here, I'd like you to focus on lip, jaw and cheek movement when watching a talker say the syllable "da" repeatedly. On one side is the full video, and on the other side we see just the dots that were on the face while he was talking. If we take away the talker's face and just watch the dots move, we can see that he is talking, and we can see that the lips, jaw and cheeks are moving. For some, this motion can even provide an indication of what he was saying.

As you're beginning to see, the talker's face can convey a wealth of speech information. But this raises the question of just how accurately individuals can lip-read, and how much they vary in that ability. We've done some research to investigate that. We have tested the lip-reading ability of several hundreds of both deaf and hearing adults. These tests take the form of watching silent videos of a talker saying a sentence and then typing in what you thought the talker said. The sentences are common English sentences and are presented without any context.

Here we see the results of one such study. The figure displays what is referred to as a box plot, and we use this type of plot because it shows the whole distribution of results in one figure. One feature of both the hearing and deaf participants' performance is just how much variation there is in lip-reading ability: the deaf range between zero and about 82% words correct, and the hearing range from zero to roughly 65% words correct. Another interesting feature of this figure is that deaf participants tend to be better lip-readers than the hearing participants. A main take-home from this graph, in relation to what I've just been saying about lip-reading, is that the high levels achieved by the best deaf participants demonstrate just how informative the talking face can be. Furthermore, the relatively low performance of the hearing individuals suggests that for many, there is a lot of room for improvement if appropriate training is given.

We now turn our attention to the question of how lip-reading helps when listening in noise. We know that for many people it is rare to rely solely on lip-reading to understand the talker; frequently, individuals will be able to both see and hear the talker. Importantly, when you watch and listen to a talker, the combination can make for highly intelligible speech.
Put simply, the combined information is far greater than the sum of its parts. In considering the benefits of lip-reading, we are thinking about how a hearing aid or cochlear implant can work together with lip-reading to improve communication. Here, I'm using a visual example to explain how listening and looking work together. The first square, filled with colorful noise, represents listening in an extremely noisy environment without a hearing aid or implant. If you look closely, you can make out that the word "speech" is in the middle of the box, but it is quite challenging, and you might not even be able to see it. The second square represents listening with the benefit of a hearing aid or implant in that same noisy situation. In the best circumstances, the increase in audibility from the hearing aid and any benefits from directional microphones make it easier to understand speech. However, there is still a lot of room to increase the intelligibility and the ease of recognition of speech. The third square represents listening to speech with a hearing aid combined with lip-reading: speech is far easier to understand, and it separates from the surrounding noise.

You might ask, if you have a really good hearing aid or implant, why is lip-reading important? I'll have the audiologist on our team address these limitations of hearing aids and implants.

Hi, I'm Dr. Nicole Jordan. I'm the clinical coordinator of audiology at the GW Speech and Hearing Center. Let's talk about why speech-reading and lip-reading are important for those that have hearing aids. Hearing aids have distinct limitations. First off, they're aids: they're designed to aid and help you hear, but a hearing aid is not going to be able to help you hear what a normal-hearing individual hears. It does not equate to normal hearing. So hearing aids absolutely have limitations. When audiologists fit your hearing-loss prescription, which is unique to you and your ears and your hearing, many times they fit the hearing aids to optimize audibility while still optimizing comfort. In doing so, many times treble tones, high pitches, and high-frequency sounds are decreased in order to optimize comfort and the naturalness of sounds. While this makes the hearing aids a lot more comfortable for you, it might decrease the clarity of some of the softer, higher-pitched sounds like S, TH, SH, P, and F.

In terms of the individual, the more hearing loss an individual has, the more difficult it is for an audiologist to maximize that audibility, comfort, and naturalness of sound. The softest sounds that you're able to hear are now louder than they would be with normal hearing. So in cases of moderately severe, severe, or profound hearing loss, there is a higher risk of not hearing certain speech sounds, or of those speech sounds sounding more distorted than they would if someone had natural hearing. Finally, with hearing aids, although the technology has come such a long way, the automatic microphone adjustments within the hearing aid and the noise reduction, the actual processing chip itself, can only do so much. A lot more needs to be done by the listener in terms of attention, focus, and filling in gaps, or, in the case of many individuals, filling in those gaps with lip-reading, being able to see the speech to better hear the speech.

Cochlear implants, meanwhile, are amazing technology and help adults with severe to profound hearing loss.
In many cases they are now also used to help those that have high-frequency hearing loss or even single-sided deafness. But they have limitations, too. While many individuals with cochlear implants have thresholds, the ability to hear soft sounds, consistent with normal hearing, having a CI or cochlear implant does not equate to having normal hearing either. A cochlear implant provides that audibility, but so much more has to be done in terms of brain hearing, listening, and communicating. While individuals who have cochlear implants have more access to those speech sounds than they did when they used a hearing aid, the improvements in noise filters and in microphone directionality may still not be enough for successful communication when there's lots of background noise present. In situations like that, being able to see the speech really helps. And if you have a cochlear implant in only one ear, you're still going to be missing important timing cues and loudness cues that are important for successful hearing in background noise. We are designed to hear with two ears and a brain, so if those ears aren't working the way they should be, even on both sides, it's sometimes going to be really difficult to hear in noisy situations. In these cases, being able to see the speech rather than just hear it makes a big difference.

So if you're a hearing aid user or a cochlear implant user, how much of a benefit can you get from lip-reading? The noise suppression of hearing aids and cochlear implants, again, has come such a long way, but it is still less effective than what your eyes can provide in terms of being able to see the speech. In this first option, you can see that the speech is still fairly faded: you can see the word "speech," but it's not clear. This is a great visual of how the noise suppression in hearing devices can help you hear a little bit better in background noise, but isn't perfect. The big gain in clarity comes from being able to see the speech while using your hearing devices at the same time: you can see in the second option that you're getting much clearer speech. In terms of the difference, digital noise reduction is able to increase your ability to hear by three to five decibels, whereas when you add the lip-reading in, it increases significantly more, up to 12 to 20 decibels.

How might this help you improve speech understanding in background noise? I love this graph here. This is what we call the speech banana: a picture of where the speech sounds lie, superimposed over an audiogram. We've got low pitches here, all the way up to high pitches, and the further down the graph you go, the more hearing loss exists. Some of the softer, higher-pitched or treble sounds are going to be up here, the F, TH, and S. Some softer sounds like P and K are also going to be difficult to hear for those that might have high-frequency hearing loss. When you add background noise, that background noise tends to be so much more powerful than the soft, high-pitched speech sounds. In audiology, we often say that the low pitches, which is where background noise sits, provide the power in speech, whereas the high-pitched or treble sounds add more clarity. So if you've got a lot of power in the background noise that is making it difficult to hear the clarity, you're really not going to understand the message very well at all.
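To put those decibel figures in perspective, here is a minimal arithmetic sketch (an illustration added to this write-up, not part of the presentation). Decibels are a logarithmic scale, so a 12-to-20 dB effective gain from lip-reading is far larger than a 3-to-5 dB gain from digital noise reduction: every 10 dB corresponds to a tenfold change in power.

```python
import math

def snr_db(signal_power: float, noise_power: float) -> float:
    """Signal-to-noise ratio in decibels."""
    return 10 * math.log10(signal_power / noise_power)

# Speech and babble at equal power is 0 dB SNR.
print(snr_db(1.0, 1.0))  # 0.0

# An effective gain of G dB is equivalent to tolerating 10**(G/10)
# times more noise power at the same level of understanding.
for gain_db in (5, 15):
    print(f"{gain_db} dB gain ~ {10 ** (gain_db / 10):.0f}x more noise power tolerated")
# 5 dB gain ~ 3x more noise power tolerated
# 15 dB gain ~ 32x more noise power tolerated
```

On this rough accounting, a mid-range lip-reading benefit of 15 dB buys roughly ten times as much headroom against noise as a 5 dB device benefit.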
So again, in most situations, even for normal-hearing individuals, when background noise is present, those higher-pitched sounds like F, TH, and S may be hard to distinguish by ear.

Thank you, Dr. Jordan. So, as we saw earlier by examining the tongue, teeth, and lips in the still images, it's easy to see the difference between "fa," "tha," and "sa." If these differences are hard to hear, they're actually pretty easy to see. This distinctiveness is even easier to see when we watch video of the talker saying "fa," "tha," and "sa." We start with a full-face, full-speed video, and then focus in on the mouth region, slowed to half speed for demonstration purposes.

Our brains are designed to combine listening and lip-reading to make speech more understandable in noise. To give you a feel for how much you might already benefit from viewing the talker, we will do a little demonstration. We will play a single sentence spoken by a female talker, but to make it more challenging, the sentence will be mixed together with some background babble. I will play the example twice to give you a chance to listen and try to understand what she is saying. Although the talker's face is shown on the screen, it will not move. Is everybody ready to listen and try to understand the sentence? Here we go. "Yes, she was shot five times." And one more time. "Yes, she was shot five times."

Okay, that was pretty challenging. So let's go on and see what we can do when we can see the talker's face moving. Now I will play the exact same audio clip, except this time you'll also see the talker's face move as she speaks the sentence. Is everybody ready to listen and watch? Here we go. "Yes, she was shot five times." Okay, I'll play it one more time. "Yes, she was shot five times." The talker said, "Do you have change for a five-dollar bill?" Hopefully this demonstration gives you a feel for how much easier it is to understand speech when you can also see the talker. But if watching didn't help, it may be that you need to improve your lip-reading, because better lip-readers can do this task better than poor lip-readers.

Okay, I'll just pause there for a second and open the floor for any questions, if you want to type in the chat or raise your hand; one of the other hosts could call on people. Carolyn, would you like to unmute yourself?

Thank you. Carolyn Odeo. My question is, how do I practice this on my own? For instance, watching, I guess it would have to be a talk show: would it be better to watch it with low volume, regular volume, or no volume in order to improve? I don't know how to do this.

That's a great question. Part of what we're doing here today is seeking volunteers to be part of our training studies, which give you access to a training protocol that we've developed. There are other online systems that provide you with practice; some of those are paid, some are free. If you were to practice on your own, the challenge is that somebody who had normal hearing through childhood and into early adulthood typically tends to rely very heavily on the audio, the auditory speech signal. So if you were to practice while listening to a talk show, for example, with the volume up, you would probably tend to focus more on the audio, and you wouldn't necessarily benefit from the training.
Some of our training studies have shown that, for example, if we combine even degraded audio with the video, the individuals who are training tend not to pay very much attention to the video. The converse is true for an early-deafened individual who grew up paying attention to the visual aspects of the signal: if they get an implant later in life and we try to train them audio-visually, they tend to focus on the visual and ignore the audio. So there's a challenge there. Part of what we're doing, as you'll see as we move on later in the talk, is training lip-reading by itself. The focus is on what you can get out of the visual-only signal, and the reason we're doing that is to keep you from falling back on the crutch of the audio. It's not because we think that in the end you're not going to use the audio. But we know that we need to strengthen the skills associated with lip-reading, and the best way to do that is to focus entirely on lip-reading.

Another issue that will come up, I'm sure, is the reliance on context. We believe context is very important when you're lip-reading and when you're listening in noise. However, our training does not use context, in part because we are really focused on developing the fundamental skills of lip-reading, so that once you have those skills, they will automatically come in and, hopefully, integrate with the auditory signal that you're already well practiced at trying to perceive. Along with the auditory signal, there is the semantic context, knowing what the conversation is about. All of those things come back in, and you're well practiced at using them; it's the lip-reading that tends not to be well practiced.

I'm just looking back through some of the questions. The other question is about the graph at the beginning, whether the lip-reading values are the same. I'm not exactly sure what the question is asking. The graph at the very beginning was showing you raw data from multiple participants, and lip-reading ability is all over the place. Our hearing-impaired adults and our normal-hearing individuals range anywhere from getting no words correct up to maybe 50 or, at best, 60% words correct. That contrasts pretty starkly with individuals who were born deaf or became deaf at a very early age. Those individuals can also be very poor lip-readers, so I don't want to leave you with the impression that just being deaf makes you a good lip-reader; that is not true. But as a population, when we look at the difference, about three quarters of that population performs above three quarters of the population of normal-hearing individuals. And by normal hearing, I mean individuals who developed through childhood with normal hearing and focused on auditory speech. So, consistently over the past 20 to 25 years that we've been studying this, we see an extremely wide range of performance in lip-reading in individuals with and without hearing loss. One of the fundamental questions we've been trying to understand is what leads to that, and one of the things that we see consistently is early and focused exposure to lip-reading. But remember what those individuals are doing in those tests: seeing a sentence without context, seeing it only once, and typing in what they thought the person said, without any audio input. That is a very specialized ability that takes years of training to achieve.
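As an aside on that "three quarters above three quarters" comparison: here is a minimal sketch of how an overlap statistic like that can be computed from two sets of scores. The numbers below are made up for illustration; they are not the study's data.

```python
import numpy as np

def fraction_above_quartile(group_a, group_b, q=75):
    """Fraction of group_a scoring above group_b's q-th percentile."""
    cutoff = np.percentile(group_b, q)
    return float(np.mean(np.asarray(group_a) > cutoff))

# Hypothetical percent-words-correct scores, for illustration only.
deaf_scores    = [0, 12, 35, 48, 55, 61, 70, 77, 82]
hearing_scores = [0, 3, 8, 14, 19, 25, 33, 41, 52]

print(fraction_above_quartile(deaf_scores, hearing_scores))
# ~0.78 for these toy numbers: most of the first group sits above
# the second group's 75th percentile, despite wide overlap at the bottom
```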
What we're talking about, in helping you to improve your speech-in-noise perception, is a much shorter-term process that looks to improve your lip-reading from where it is: (a) to get you a little more accurate, and (b) to get you a little more comfortable with it, so that you're not expending quite the same level of effort and you're not quite as exhausted at the end of a conversation, because you've got a little more skill in decoding the visual speech signal.

There is also a question about lip-reading accented speech. If somebody has a dialect of English or an accent in English, it can be challenging. However, there's still information to be extracted, provided the individual is moving their lips and moving their mouth. There are great individual differences in talkers as well, so it's not just the perceiver, the person who's trying to understand speech; the talker matters too. You have to be able to see the face moving, to see the speech articulators moving, and the person has to be moving them in a normal sort of way. We don't want somebody to do what we call hyper-articulating, where they accentuate everything; in those situations it actually becomes more difficult to lip-read. You want a natural flow of speech.

Okay, are there other questions before I continue on to the training section?

So, Ed, I see a question there. Rich Osborne asked if you have any recommendations for lip-reading classes.

I personally don't have a particular lip-reading class that I favor. What I can do is share, after the session, some programs that I know of. There's a Canadian system I know of that has given a talk through HLAA and offers lessons online, and there are a few other programs; lipreading.org has lip-reading classes. And obviously this is a push to get people to participate in our lip-reading training. My view on this is that the more sources of information you can be exposed to, the better. We're focused on one aspect of lip-reading training that I'll discuss here, and I'll be happy to answer more questions about why we're focusing on that after we've gone a little further in the presentation. But some individuals need more instruction, what I would call didactic instruction, where you're being instructed: this is the difference between these consonants, this is specifically what to look for. My personal feeling about that, from my own research, is that it's helpful to an extent. As you move on and try to use this in real life, in real time, it's much better to gain the skills through the kind of experiential training we're offering, because quite frankly you don't have the time for the amount of explicit thought it would take to decode the signal; you don't want to be thinking about it too much, is a quick way of saying it. I'd be happy to share some of my experiences as a learner, not only of lip-reading but of related skills, that have informed this need to focus on the low-level perceptual skills, on making them more automatic than they typically are, and why that's an advantage. I'll be happy to return to that point at the end.

Other questions? Okay, why don't I go ahead and finish out the video, and then we can return to questions and answers at the end; that way the video can be the focus of the screen rather than just a small box. So here you go.
Another way to look at the benefit of lip-reading when listening in noise is to look at some results taken from our ongoing studies. We start with the data from an 80-year-old hearing aid user. We are showing the results for word recognition in sentences. In this activity, sentences are presented one at a time, and the participant types in what they thought was said. All of the audio is presented in combination with noise that is similar to white noise, or loud rushing air. Looking at the pink bar, we see this participant is really struggling when listening in noise: he can only get about 5% of the words correct. Looking at the blue bar, he's also not a very good lip-reader; he is more like the typical hearing participants we showed in the box plot earlier, lip-reading about 13% of the words correctly. However, when he was able to see and hear the talker saying these sentences, his accuracy was greatly improved, going up to 41% words correct. This is a good example of the benefit of being able to see and hear the talker. You can see that performance in the listening-plus-lip-reading condition is substantially higher than would be expected by adding the listening-alone and lip-reading-alone performance together.

In this figure, we show performance on the same task for a participant who listens through a cochlear implant. Looking at the pink bar, this individual was doing much better when listening in noise: they were able to recognize 41% of the words correctly. They were also somewhat better at lip-reading, but still only recognized 22% of the words correctly when lip-reading. Importantly, we see that this participant was much better at understanding speech when they could see and hear the talker, recognizing 65% of the words in sentences correctly.

Finally, we show here that a similar pattern of results holds for a participant with normal hearing. The pink bar shows that they were able to recognize 28% of the words correctly when listening alone; listening in noise is clearly a challenging task even for individuals with normal hearing. The blue bar indicates that they were a typical hearing lip-reader, getting about 19% of the words correct. Looking at the green bar, this participant really benefited from seeing and hearing the talker: they were able to recognize 68% of the words correctly. This level of performance is particularly impressive given that they were not very accurate when they could only look or only listen. It's important to note that we purposely make the conditions difficult in our testing. We would expect that if the noise levels were not as challenging, these participants would easily have come close to or reached 100% words correct when able to see and hear the talker.

To sum up so far: we have seen that lip-reading contributes to distinguishing the consonants and vowels, and that this helps us to recognize spoken words. Lip-reading can improve our ability to understand speech beyond using a hearing aid and/or cochlear implant. Lip-reading can greatly improve understanding of speech when hearing speech is difficult, as in noisy social settings. As we turn to the discussion of training to improve your lip-reading ability, an important question to ask is: what is it that makes a good lip-reader? The answer to this question is a key message that you should take away from today's presentation.
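Before the answer, a brief aside on that "more than the sum of its parts" pattern. One common way to see how striking the hearing-aid user's numbers are is to compare them against a probability-summation baseline, the accuracy you would expect if the auditory and visual channels each contributed independently. This is an illustrative calculation added here, not the authors' own analysis.

```python
def prob_sum_baseline(p_auditory: float, p_visual: float) -> float:
    """Accuracy expected if the two channels succeeded independently:
    a word is missed only when BOTH channels miss it."""
    return 1 - (1 - p_auditory) * (1 - p_visual)

# Hearing-aid user from the talk: 5% auditory-only, 13% visual-only,
# 41% audio-visual.
baseline = prob_sum_baseline(0.05, 0.13)
print(f"independent-channels baseline: {baseline:.0%}")  # 17%
print("observed audio-visual: 41%")
# The observed score far exceeds the baseline, which is the sense in
# which seeing and hearing together is more than the sum of its parts.
```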
To return to that question of what makes a good lip-reader: the answer that we've arrived at, after years of research studying expert lip-readers, is that they have the ability to recognize a large number of words without knowing the context. The key is good lip-read word recognition. We know that this is not an easy thing to achieve, and getting better will take some time and effort on your part. We are also not discounting the importance of context. Of course context is important, but to get really good at lip-reading, you have to have fast and accurate word-recognition skill based on watching the talker. Think about recognizing words when you're listening in a good setting: you're not thinking about the consonants and vowels and how they match the sounds that you're hearing; you're just hearing the words. Skilled lip-readers aren't thinking about the mouth shapes and tongue positions; they're just seeing the words.

We don't expect you to become an expert lip-reader with this training. However, we know that the better you are at lip-reading, the larger the benefit you get when watching and listening. The research shows that poor lip-readers get less benefit than better lip-readers when speech is noisy. Even small improvements in lip-reading will result in a benefit to you.

So how do we get there? Even though most adults who grew up with normal hearing are not particularly good at lip-reading, our evidence suggests that most adults do have some ability to lip-read. In examining our participants' responses, we frequently encounter evidence that even our less skilled participants are perceiving the information on the talker's face, but struggling to recognize words using that information. Oftentimes, this comes in the form of humorous responses. In this example, the talker said, "Proofread your final results." Our participant responded with "blue fish are funny." While this may seem nonsensical and unrelated at first, on closer examination this response is quite reasonable: the visible production of "proof" is well matched with the response "blue fish," and the production of "final" is well matched with the response "funny." So this type of response provides evidence that the participant was responding to what they saw the talker produce, even though they responded with no correct words. It suggests that what they need is feedback on their errors.

Before we describe what successful training should look like, we begin by discussing what it shouldn't look like. What does unsuccessful training do? It makes you good at naming consonants and vowels in isolation, but no better at comprehending words alone or in sentences. Learning to lip-read is not about learning to explicitly match the talker's gestures to the names of the consonants and vowels. That is not what skilled lip-readers do, and furthermore, it's not what we do when we are listening to speech. Unsuccessful training also improves your lip-reading of only the materials that you train with. This type of training is, at best, tuning you to the materials you train on; it does not extend beyond those materials, and it does not lead to real-world benefits. Unsuccessful training overemphasizes guessing. While guessing can have a role to play, the primary emphasis should be on decoding what the talker is saying, in the context of word recognition. Unsuccessful training also overemphasizes reliance on context. While context is important, what the talker is saying is where the information is, and understanding some of their words can greatly increase the context that you have.
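The "blue fish are funny" example reflects a well-known property of visual speech: many consonants that sound different look nearly identical on the face, so they fall into shared classes often called visemes. Here is a toy sketch of that idea. The grouping below is deliberately coarse and illustrative (real systems use finer, data-driven classes), and it shows why "proof" and "blue fish" are visually compatible.

```python
# Coarse, illustrative viseme classes: consonants that look alike on the face.
VISEME_CLASSES = [
    {"p", "b", "m"},                  # lips pressed together
    {"f", "v"},                       # lower lip to upper teeth
    {"th", "dh"},                     # tongue tip at the teeth
    {"w", "r"},                       # rounded lips
    {"ch", "sh", "j", "zh"},          # protruded lips
    {"t", "d", "s", "z", "n", "l"},   # tongue behind the teeth, hard to tell apart
    {"k", "g", "ng", "h", "y"},       # articulated largely out of view
]

def same_viseme(c1: str, c2: str) -> bool:
    """True if two consonants fall in the same coarse viseme class."""
    return any(c1 in cls and c2 in cls for cls in VISEME_CLASSES)

# 'proof' (p-r-oo-f) vs 'blue fish' (b-l-oo, f-i-sh):
print(same_viseme("p", "b"))  # True: the initial consonants look identical
print(same_viseme("f", "f"))  # True: the /f/ is visible in both
```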
So what characteristics does useful training have? It improves lip-reading of words and talkers that are different from the training materials; that is, it generalizes. It corrects vowel and consonant errors, so training should provide feedback on more than what you correctly recognized. The brain makes use of feedback on errors to correct performance; if feedback only covers what you correctly perceived, the brain does not have what it needs to improve and learn. Think about learning any difficult skill: it's nice to know when you get things right, but where you really need the feedback is where you're having a hard time and struggling the most. Finally, useful training improves audio-visual speech recognition in noise, so it makes things easier when you're listening and looking at the same time.

We have developed a variety of training protocols based on our laboratory research and the research of others. We have observed 10 to 20 percentage-point increases in word recognition with roughly 5 to 6 hours of training spread over roughly 10 sessions. In part thanks to COVID, we are now testing these training protocols to measure their strengths and weaknesses when delivered via the web. Our system uses sophisticated software to analyze the errors that are made during lip-reading and to provide useful feedback for practicing and improving word-recognition skills.

In a moment, we will be demonstrating three of the training protocols that we are testing. Each of those demo videos lasts about two minutes, and afterwards we'll open it up for questions that you might have. Before we do that: we could really use your help, by participating in our research studies. These studies are completely remote, and there's no need to come into the lab. We are currently seeking individuals between 18 and 85, with and without hearing loss. You must have primarily heard spoken English during the first 10 years of your life, have had no major concussions or other brain injuries, and have normal or corrected-to-normal vision. If you're interested in participating, or you know somebody who might be, please contact us by going to seehere.us, which is our research portal. You can also email me at eauer at gwu.edu.

Now we turn to the demos. This first video shows a training approach that focuses on word recognition in isolation.

This is a how-to video for lip-reading training. It shows all of the activities you will do with every training word. You will be training with 60 real words in each session, and the words will vary in difficulty. Each word will be introduced by a target consonant or consonants, and to begin the training, you will click on that consonant. You will then see two words spoken, and you must pick the word containing the target consonant: select one for the first word, or two for the second word. If you're correct, you will then lip-read the word and type in what you think it is. Let's try an example. The target consonant is K. I'll click on the K and then watch the videos. I think the target consonant is in the second word, so I press the second button, and I'm right. Now I watch the word again to identify it. I think it's "lake," so I type in "lake," and I'm right. Okay, let's try another one. Now the target is G. I think it's in the second word, and I'm wrong. So let's try it again with G. I'll try the second word again, and I'm right. Now to identify it. I think it's "wagging." I was close, but not quite correct, so I get to see the consonants and try again.
This time I type in "wagon," and it looks like I'm right, so now I can go on. You'll do this activity for 60 words in each session.

The second video shows a training approach that focuses on word recognition in sentences.

This is a how-to video for lip-reading training. It demonstrates the sentence-training activity. In this activity, a talker will speak unrelated standard English sentences, and you will try to lip-read those sentences. For each video, you will watch the sentence and then type in what you thought was said. The software will score your response and then display the words that you lip-read correctly. It will also display the consonants in words that were close. Correct words are printed in green, and correct consonants that were in close words are printed in amber. You will then have a second chance to lip-read the sentence and type in your response. The software will again score your response and display how you did. It will also print the correct answer and show you the video one more time. During this activity, please focus on the lower part of the talker's face. Also, please type words or parts of words that you think may be correct; the software needs your response to provide you with feedback. Let's try an example by pressing the button. I thought he said "That was cool last night," so that's what I'll type in. "That" and "last" were close, so I only get to see their consonants; "cool" and "night" were correct, so they're in green. With that feedback in mind, let's watch again and see if we can do a little better. "It gets cool at night," and that's correct. Now we get to see the sentence typed out and watch the video one more time. This is how the activity will go for the rest of the sentences in today's session.

The third video also shows a training approach that focuses on word recognition, but now we're using nonsense words like "yotted," and we're emphasizing visual memory.

This is a how-to video for lip-reading training. It shows all of the activities you will do with every training word. Look at the printed letters: every word has two D's and one other consonant. The vowels are always "a" and "i," and they are never printed. Here's an example. Look at the printed letters and say the word "yotted" to yourself. Remember "yotted" while we work through this example. I will click on "yotted" to start the activities. My goal is always to match the first video word to either the second or third video. Now I have to decide whether the second or third word matches the first word, "yotted." I chose word two; I was wrong, so I get to try one more time. I have to pay attention, because the correct word may be word two or word three on the second try. Look at the printed letters and say "yotted" to yourself. Remember "yotted." I think the matching word is word two. Now I'm correct. Lastly, a different talker will say a word, and I will have to decide if she said "yotted." I think it was the same, so I will click on "same." Looks like I was correct. Let's go through one more example. This time the word is "doshid." I thought it was two, and I was correct. I thought it was the same, and I was correct. You will do this activity with 80 different nonsense words in each training session.

To recap: lip-reading is important because it compensates for hearing difficulties, in noise and in quiet. The better a lip-reader you are, the more lip-reading can benefit you, whether speech is in quiet or in noise. Training can improve lip-reading, but not all methods work.
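As an illustration of the kind of automatic word-and-consonant scoring described in the sentence demo above, here is a toy sketch. It aligns the typed response to the target sentence, marking exact matches as correct and near-misses as close. Note that this sketch compares spellings, whereas the actual system analyzes responses at the speech-sound level, so treat the names, threshold, and logic here as hypothetical.

```python
import difflib

def score_response(target: str, response: str):
    """Toy scorer: align response words to target words; exact matches are
    'correct' (shown in green), similar words are 'close' (consonants in amber)."""
    t_words, r_words = target.lower().split(), response.lower().split()
    results = []
    for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(a=t_words, b=r_words).get_opcodes():
        if tag == "equal":
            results += [(w, "correct") for w in t_words[i1:i2]]
        elif tag == "replace":
            for tw, rw in zip(t_words[i1:i2], r_words[j1:j2]):
                sim = difflib.SequenceMatcher(a=tw, b=rw).ratio()
                results.append((tw, "close" if sim >= 0.5 else "wrong"))
        else:  # target words the response skipped entirely
            results += [(w, "wrong") for w in t_words[i1:i2]]
    return results

print(score_response("it gets cool at night", "that was cool last night"))
# [('it', 'wrong'), ('gets', 'wrong'), ('cool', 'correct'),
#  ('at', 'close'), ('night', 'correct')]
```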
The training needs to give you feedback to help you improve your perception of consonants and vowels in the process of spoken word recognition. Before I open it up for questions, I'd just like to acknowledge the National Institutes of Health for supporting our research. Thank you for your attention.

Okay, so I'll stop the share there, obviously, and open it up for questions. I see Dale has a question. Dale, can you unmute yourself? Thank you. You're still muted.

I don't have a question. I didn't mean to press the hand.

Okay, not a worry. So, two things came up in the chat while the second part of the video was playing that I'd like to address. The first is: has the persistence of this training been studied? These studies are currently seeking to establish that there's actually benefit outside of the laboratory. Some of these techniques have been tested in our laboratory, where we have people come in and do the training under very controlled conditions, and we've shown benefit from pre- to post-testing. That has not yet been established outside of the laboratory for these techniques, and I can't really say more at this point because it's an ongoing study. The purpose of what we're doing here is to get people to enroll so we can test whether these techniques work outside of the laboratory. As I mentioned, the testing has, in part because of COVID, advanced in terms of being offered outside of the laboratory. Some of the techniques that we're currently testing were intended to be tested in the lab pre-COVID; as COVID hit, we were in the process of recruiting people for in-laboratory testing, and obviously that wasn't going to work out. So we moved several of our protocols onto the web platform at seehere.us, and we're testing those and collecting the data for all of these protocols throughout the summer.

To answer the question about persistence: for lip-reading training, where we have rigorous studies in the lab, the challenge has been to show improvement in the first place. The next step, after showing improvement, is follow-up testing several months, even years, down the road to see whether the benefit persists. That's a great question, it's one I'd like to answer, and we will be focused on answering it as the evidence comes in.

One thing that I would like to emphasize is that we would like to make these into a commercial training program at some point down the road. But what we're focused on currently, and what we're funded to focus on, is establishing that they actually have benefit. We wouldn't allow them out of our lab if they didn't have benefit; there's no point in doing it otherwise. Fortunately, in the lab, we've definitely seen improvement, and to the extent that I've seen results, some of these same protocols are showing benefit outside the lab, under very uncontrolled conditions. I should also mention that if you participate in these studies (many of you are in the Bay Area, where internet speed is not an issue), you have to have a good enough connection to download the videos as they are needed, and a good enough computer to run the software. Typically, if you're able to stream movies, you've got more than enough capacity to do this.

The other question that I saw was from somebody who teaches live lip-reading classes.
I want to make sure it's clear that we currently see what we're doing as developing tools that will supplement that type of training. There are certain things that are great to do live, and a live class can give you experience and instruction and pointers on how to lip-read. What we're developing here is a tool, more like what I believe Carolyn asked about earlier: can you practice this? We want to have a suite of tools that you can practice with and that help you improve by giving you feedback in a way that would be very challenging to do in a live class. In part, because you're typing in what you thought the person said, you're making very explicit what you thought was being said; we know what was actually said, and we're able to automatically map between what you typed and what was said and find where you're making errors. The types of errors that we're looking at, as I alluded to with "proofread your final results" versus "blue fish are funny," are below the word level, at the speech-sound level. So we can say, over a period of testing, whether you have particular difficulty with certain consonants, for example, or whether you're not getting where the word boundaries are, or something like that. Once we have this automated tool and have shown it to be successful, it's about information that helps guide where you need more help. Some of that may be improved by explicit instruction, hands-on, you know, face-to-face, and some of it may be through the automated techniques that we're developing currently. Those are the types of things that we're focused on, and I do want to say that we see this as a complementary technique to live instruction.

I see that there's a question about participating. The criteria are set out by our funding agency, which is NIH, and also by our human-subjects agreement with our university, which oversees our research; those put certain limits on who can and can't participate. That said, if this is something that you're interested in and you don't think that you qualify, please email me and get in touch, because we also make the same training techniques available to people so that we can try them out with different populations and get feedback from you. It wouldn't be as extensive in terms of a pre-test and a post-test, but you'd get all the same training that we're offering our participants, and I'd be happy to meet with people one on one to get you set up.

The other thing I want to mention is the testing, if you enroll as a participant. There are two to three sessions where we just do testing of your abilities. You go through a training protocol, and then we get back together and do another session via Zoom. I'm pretty much the person who's handling all of those, so I'm happy to have a conversation with anybody about how best to improve this or how best to get involved.

I see that there are a couple of other hands up. If one of the moderators wants to jump in, since you might see more than I do... You're next; please unmute yourself and ask your question.

Okay, so I'm the person who's going to be uploading this to our YouTube channel, and since I have research information about asking for volunteers, I would like to know how long you are going to be accepting people for the research.

So we will be accepting people through, I would say, August to September.
I would put September as the limit on that, and if people still want to contact me after that, we'll see where we are with the funding situation at that point. But that's the current plan.

Thank you.

Alan, do you have any other questions?

No, that's all. Thanks.

So the next person is Jerry.

Thank you very much. My question is about the examples of sessions you went through on screen. I know one section was like 60 responses or questions, another was like 80, and so on. Could you go over in a little more detail the time commitment, the sessions, and how long those sessions go on, please?

Right, absolutely. As I mentioned, there are two studies that we're running, and there are several protocols within those studies, but they all start with two sessions, and those tend to be about an hour. It really depends on how much you're responding to the stimuli, so how much you're typing in and how quick a typist you are. Those range anywhere from 30 minutes, for a very fast typist who's not responding a lot, to maybe an hour and 15 minutes. All of our protocols have the ability to pick up where you left off, so if you ever got to a point where you just couldn't continue, or didn't want to continue, we could restart and pick up where we left off; that's not a problem. The final testing is also about an hour long. Those are Zoom sessions with me, where I'm available the entire time for questions.

For the training, you picked up correctly that there are different numbers of materials being presented. We've tried to time these out so that they're between 30 minutes and an hour at most; we're making an educated guess, based on pilot studies, about how long it takes somebody to go through them. Again, it really depends on how much you're focused on the materials. If you're taking a long time to really digest the information, it could take you longer, maybe the full hour; if you're going through it and you're actually pretty good at this, you could easily finish a session in half an hour. In one study there are 10 sessions, and in the other study there are eight sessions of training, and like I said, they range between 30 minutes and an hour, depending on how much focus you're putting on them.

So typically our participants are in contact with us, I guess is the best way to say it, for roughly three to four weeks from start to finish. Part of that is that we do the initial session, we get you enrolled, and we do some lip-reading testing. Once you're done with that, we send out a sound level meter, a copy of the consent form, and some other paperwork. Primarily we're getting a sound level meter out to you, which is yours to keep as part of the study, that we use to calibrate the output of your computer so that we know how loud things are being presented to you. We absolutely don't want to present anything too loud, but we also want to make sure it's presented at a level where you can hear it. Sending out the sound level meter takes two days to a week, roughly, depending on mail and things like that, so that puts a big bump at the beginning. Then, as soon as you're done with that second session, you can do one of the training sessions per day, so that's either eight days or 10 days. Now, you don't have to do them on consecutive days; we're not actually measuring that time lag, so some people take eight days to finish.
Some people take a month to finish because they had a vacation planned, or something like that, and had to travel. So it's really up to you. But that gives you a ballpark, and if you have more specific questions, I'm happy to try to hone that down.

Well, I appreciate that information; it is helpful. I have to admit that I didn't do too well on those sample questions you were showing.

And that's okay. That's okay.

Wow, "room for improvement" is an understatement, hopefully. What is the success rate that you've been finding with people? Because I could see people getting very frustrated doing this and, you know, not sticking with it. But with anything in life, if you stick with it, you get better. So what encouragement do you have for those of us down at the bottom of the chart when we did that practice?

Right. So, I mean, not everybody improves; I'm not going to sugarcoat it and say that everybody improves. But many of the people that we have tested, or who have gone through the training, get better, at least on audio-visual. Getting better on visual-only is a challenge; it's taken me personally years to get better, but I do see benefits from my own experience. We are seeing improvements in a lot of people on audio-visual. That is not uniform across conditions, and that's part of the study. I will say that if we were to see, for example, that a condition was not improving at all, or that it was actually making things worse, we would need to pull that from the study, and we're monitoring for that sort of thing. But we need to be able to see a difference.

And fortunately, if you start out at the lowest level: what I usually do in setting up and talking to somebody at the beginning of the study is get a feel for where you're at. If you feel like you have no idea, that you're not going to type in any responses because when somebody talks and you're lip-reading only, you have no idea what they're saying, I would probably guide you to one of the studies that has a little more structure in it and that gets you off the ground with a task you can succeed at. I realize those examples go by very quickly, but in the tasks where you're seeing a letter and having to choose between a couple of words, for example, I'm designing those tasks so that they're doable. I know some trials are going to be really hard, but they're designed so that getting the right answer is achievable, to reduce some of that level of frustration.

In terms of what we've seen, I can only speak to the results that have been presented back to me, which is a portion of them. We purposefully keep things separate: I'm running the study, so I'm not actually looking at the data; I'm somewhat blind to it. But I do get feedback that we're seeing improvements in lip-reading ability, mostly on audio-visual.

That helps.

And I'm certainly sensitive to the fact that if you come in and have no idea whatsoever, or are the type of person who doesn't want to respond, we'll make sure you get into a study that has a little more support for you.

I appreciate that, and your encouragement, because then there could be hope for me too.

Oh, absolutely. There's definitely the ability to improve your lip-reading.
You can have all of the words wrong and still be showing evidence that you were paying attention and pulling information out. The key is that we're seeking to take advantage of the information you're already getting and improve upon it. And I can tell you from computational studies, where we model performance, that you don't have to become a great lip reader to leverage a benefit from lip reading while listening. We just want to get you a little bit better, and we know that the best way to do that is to focus on the lip reading. It's a challenge, though. So, thank you.

Before we go to another question, I want to address the question that just came up about the hearing loop or Bluetooth. The problem we have is that, for the protocol NIH approved, we need to be able to calibrate the output levels of your computer. If you go through an induction loop, or if you go through Bluetooth, we have no way of doing that; it makes it impossible for us to control levels. So for those portions of the testing we do require that the sound be going through the air, through speakers, because then we can use a sound level meter to measure it. It's entirely because of the way the protocol was set up. Again, in our lab we had planned to do all of this testing through calibrated speakers, where we knew exactly what was being presented and how loud it was. We've moved to this approach, which NIH signed off on, where we're testing outside of the lab through your speakers, but with the provision that we do some testing with the sound level meter, and that once we've calibrated with it, we use your responses to test what you're able to hear through your system. So that's basically how that works. And again, if you're not comfortable working without the loop or Bluetooth, which I understand is quite possible, please do get in touch with me. We can still make the training available to you so that you can try it out without going through that testing. That way you at least get some exposure to it, and we get your thoughts on whether it's beneficial to you. Honestly, we wouldn't be able to use that type of information for publications, but we want to keep a very clear line, not a fine line but a clear distinction, between what we treat as research and what we treat as informal feedback on the system.

Other questions? If you have your hand up, are you still here?

Okay. So, I'm Erlene from the HLAA chapter in Whatcom County, Bellingham, Washington. I just want you to please repeat your site, where to contact you.

Sure, I will type it into the chat. Just a second. So, Erlene, open up the chat, and you can scroll through it and see the email address.

Okay, I get it. So it's the same as Eddie Bauer, except without the B. Yeah, I got it. That's all I wanted to know. Thank you so much.

Okay. I'm also typing in the web address, see hear dot us. That is our research portal that we're using for all of our testing. In terms of full disclosure, I do a lot of the web programming as well. So that is our primary site for testing, and you can get information on both studies if you go there.
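To make the calibration step described above a little more concrete, here is a minimal sketch of the arithmetic it relies on, assuming a simple scheme in which a reference signal is played at a known digital gain and the participant reports the reading from the mailed sound level meter. The function names, the target level, and the gain cap are hypothetical and are not taken from the study's actual software; the point is only to illustrate why a meter reading of sound traveling through the air, rather than a loop or Bluetooth stream the meter cannot measure, is what allows presentation levels to be set.

# Hypothetical illustration only; not the study's actual software.
# Idea: play a reference signal at unity gain, have the participant read
# the level from a consumer sound level meter, then compute how much
# digital gain is needed so speech reaches a chosen target level.

def gain_for_target(measured_spl_db: float,
                    target_spl_db: float,
                    max_gain_db: float = 12.0) -> float:
    """Return the digital gain (dB) to apply to the playback signal.

    measured_spl_db : meter reading while the reference signal plays.
    target_spl_db   : level at which the protocol wants speech presented.
    max_gain_db     : safety cap so nothing is boosted excessively.
    """
    needed = target_spl_db - measured_spl_db
    # Clamp the correction so nothing is presented uncomfortably loud.
    return max(-60.0, min(needed, max_gain_db))

def db_to_linear(gain_db: float) -> float:
    """Convert a dB gain to the linear factor applied to the samples."""
    return 10 ** (gain_db / 20)

if __name__ == "__main__":
    # Example: the meter reads 58 dB SPL for the reference, and the
    # protocol targets 65 dB SPL, so apply +7 dB of gain.
    g = gain_for_target(measured_spl_db=58.0, target_spl_db=65.0)
    print(f"gain = {g:+.1f} dB, linear factor = {db_to_linear(g):.2f}")

In this sketch, a meter reading of 58 dB SPL against a 65 dB SPL target calls for about +7 dB of digital gain, roughly a 2.2 times increase in amplitude; a loop or Bluetooth connection bypasses the acoustic path the meter measures, which is why those routes cannot be calibrated this way.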
Ed, we'd really like to thank you for giving us this very encouraging presentation today. And maybe even more than that, thank you for directing your life's energy toward creating tools that can help us function better in society. I'm sure I can speak for everybody here today in expressing our gratitude for that.

Well, thank you. We can't do this work without the help of people like you getting the message out, so we really appreciate it.

And I'm thinking that you're going to get a lot of response from the people who are here. Alan and I will also put together a flyer advertising that you're looking for people to participate in the clinical trial of this. Do you have a flyer? I have a flyer that I'll share with you. So you'll send it to me? Yep, I'll send you an email with it.

So before we let Ed go, does anybody have another question for him? Okay. Ed, you're welcome to stay; we have a few other things that we're going to engage in here.

We'd like to remind everybody to save the date for the Walk for Hearing. It's just right around the corner, less than a month away. It's going to be in Alameda, and it starts at nine o'clock in the morning. Alan and I have started to send out some requests for donations and reminders about the walk, so we want everybody to step up. If you think the support you've gotten from the Diablo Valley chapter and from HLAA is important, this is your opportunity to express that gratitude by showing up and walking at the walk, and also by making a donation. Alan created a fabulous QR code for us to make it super easy for you to donate. If you go to our website, the QR code is there, and it's on all of the publicity pieces you're going to get. All you have to do is scan the QR code with your phone and it'll take you right to the donation page. If you're uncomfortable doing that, all of the Donate Now buttons you see will also take you there when you click on them. And can you donate by check? Yes. There are also some other options besides a straight donation: you could create your own team, or you could register as an individual. If you need help with either of those, please contact Alan, and he's more than happy to help you. So get ready, get set, and come walk with us.

The upcoming events we have are, on June 4th, the Walk for Hearing in Alameda. It turns out that the last Walk for Hearing in Northern California held in person was in 2010. Then we didn't hold walks for a while, and we were scheduled to hold one the first year of the pandemic, so we didn't have one that year or the next year. So this would be the first time in a long time, and it should be a whole lot of fun. We also have the HLAA convention, which is being held in Tampa this year, June 23rd to 25th. As things keep changing in the flux of COVID, I definitely was not planning on going before, but now I'm kind of waffling and thinking, well, maybe I can really go.

We have a need for some committee members. We have formed a programs committee, and I'd like to thank Zoher Chiba and the other members of the committee for arranging this presentation today with Ed. We're also looking for some new people to be part of an advocacy committee with me. As you all know, I'm a dyed-in-the-wool, very robust and outspoken advocate for people with hearing loss.
We are also a member organization, and we'd like to remind everybody to renew your membership. You can just go to our website and click on the membership tab, and it'll connect you to our PayPal account. You don't have to register for PayPal to renew your membership online. You can also mail it to us; the address and directions for mailing it in are all right there on our website. And we would be remiss if we didn't mention the muffins from Bob's Astro that we would be giving everybody if we were doing this in person.

We want to remind everybody what communication access is: communication access helps people with hearing loss the same way that ramps help people with mobility issues. As a group, we're all reticent to speak up and ask for what we need, and we need to do that. You need to think not about how well you do on your best day, but about what happens when you are not doing well, because if you don't ask for the accommodation you could need on your worst day, then when you're in the place where you need it, it's too late to ask, and you're stuck. The ADA is 32 years old this year. Hearing loss is a disability, and it's covered under the ADA. You need to ask for your communication access. It's very shameful that so many places aren't providing it for us; they can't say that they didn't know. And where should you have it? Everywhere. I really want to point out to everybody the importance of communication access in healthcare. I currently serve on the patient advocacy committee at UCSF, and I gave a presentation to John Muir Health last month asking that they form a committee there. If you don't speak up, we're not going to get anything. And if you speak up, as I've spoken up, and things don't change soon, it's time for all of us to sue them, because that's what they'll listen to.

That's it for today. We're two minutes over. We'd like to thank our captioner; as you know, it's the most important thing in the world for us that we have captions, so thank you very much for providing such wonderful captions for us. They were perfect. And before we go, does anybody have one last thing to say? Any news? Anything?

Okay, I don't see any hands. So I'd like to remind everybody that the Walk for Hearing falls on what would normally be our scheduled meeting date in June, so we're not having a meeting in June. In July, we may have a brown bag picnic, but our regular meeting time for July is over the 4th of July weekend, so we won't be having a meeting then. So we won't be seeing anybody until August, and in August, at the moment, we're planning on having a presentation by Pat Dobbs about communicating with your spouse.

So we look forward to seeing you in Alameda, and everybody can make a donation to the Walk for Hearing. The minimum donation is $15, and that's hardly anything today. And I have to tell you, if all of you don't donate, it makes me feel really bad for all the work that I put into maintaining and keeping our chapter going. So if you're sitting on the fence about it, just think about me and think about whether the work that we do here is worth it. So thanks very much. See you all in June.