session. And we are. So, that's wonderful. Great. So welcome and good afternoon, everybody, to Accessible Technology Services' webinar series on video accessibility. My name is Gaby de Jongh, and I'm a member of the IT accessibility team, and today Terrill Thompson and I are going to present several topics on video accessibility, including captioning, audio description, and accessible media players, just to name a few. We've got a lot of content to cover this afternoon, so let's go ahead and jump right in. Okay, so when we think about accessible video, we think about who will be impacted by inaccessible video. Certainly users who are deaf, hard of hearing, or otherwise unable to hear audio will be impacted, and the solution is to provide captions. This is an easier task than one might think, and it's an available option in many video platforms; in a bit I'll go over steps for captioning video in Zoom, Panopto, and YouTube. Others who will be impacted by inaccessible video include users who are blind, have a visual impairment, or are otherwise unable to see the video. The solution for this barrier is to provide audio description, and Terrill is going to talk about audio description and offer some solutions later on in the presentation. Users who are deaf-blind, and are unable to hear or see the audio and video, will also be impacted, and the solution for this is to provide a transcript. Users will most likely consume this information using a refreshable braille display and a screen reader to access the text. Transcripts are great and useful for searching keywords, and allow users to jump to specific sections of a recording based on that search. There are other examples of individuals impacted by inaccessible video, including folks who don't use a mouse and only use a keyboard to navigate, or who use speech input to dictate documents or to navigate and control systems.
Other examples include individuals who use screen readers to access information and navigate, or individuals with low vision who depend on high contrast or a custom color scheme. The solution to these barriers is to provide an accessible media player that can offer captions in multiple languages, audio description, ASL interpretation, control of playback rate and speed, as well as other customizable options. Terrill is going to talk more later about accessible media players, including an accessible media player that was developed here at the UW. And just for clarification, we wanted to make sure we presented the differences between the accessibility offices' responsibilities when it comes to accessible video. If an individual has requested an accommodation, Disability Resources for Students will provide funding and support for captioning and audio description of course materials for students, and the Disability Services Office will provide those services for faculty, staff, and visitors to the University of Washington. Accessible Technology Services provides internal grant funding for captioning high-impact videos in a proactive manner, and we also provide training and support for UW departments with regard to accessible IT. I'll talk about the grant-funded captioning service in more depth as we get into captioning. So, more about captioning, and hopefully we have captions turned on for this presentation as well. How do you caption videos? For the next few minutes, I'm going to talk about captioning and show you techniques for enabling automatic speech recognition in Zoom, Panopto, and YouTube. It's important to note that automatic speech recognition, or ASR, captions may not be accurate enough to serve as an accommodation for people who depend on captions. Although the accuracy may be really high, ASR may fail to convey the context of what is happening in the meeting, and can often mislabel speakers.
Also, technical, medical, legal, and other specialized terms are often not transcribed accurately. That said, Zoom's ASR is really, really good, and in some situations individuals prefer Zoom's automatic captioning over human captioning. So it may be appropriate to reach out to individuals who request accommodations and ask if automatic captioning is acceptable. If they say yes, then you're good to go; but if they say no, then you should make arrangements to hire a human captioner. I'll also show you tips for editing caption files using caption editors. So let's go over some steps in Zoom first. It is possible to have live automatically generated captions for your meeting or webinar, depending upon the type of Zoom account that you have. We're going to go over how to make sure that you have the correct settings enabled for displaying captions for webinars or meetings. But keep in mind that these settings do need to be configured well in advance of your meeting or webinar, and you'll need to make sure that you're logged into your Zoom account through your web browser, rather than through the Zoom client itself. This slide shows where to find the settings for cloud captioning. When you're logged into your account from a web browser, you want to select Settings from the left-hand navigation menu, and from the Meeting tab, you want to click on In Meeting (Advanced) to expose those controls. If you scroll down toward the bottom, you'll see a toggle switch for Closed captioning. Make sure that you turn that on, and select the checkbox for enabling live transcription to appear in the side panel of the Zoom window. You also want to make sure that you toggle on Save Captions, which allows participants to save the transcripts to their local computer. Performing these steps only enables captions to appear during the live Zoom meeting.
It doesn't allow for captions to be saved to the Zoom cloud recording, so there are some extra steps for that. If you want to save your captions in the Zoom cloud recording, you have to enable Zoom audio transcription. To do this, when you're in your Zoom account, you want to select the Recording tab and click on the checkbox that says Audio transcript, and this will automatically transcribe the audio of a meeting or webinar that you record to the cloud. One of the disadvantages of captioning is that it breaks off when you go into breakout rooms, so you do have to restart captioning again. And if you do have somebody who has requested an accommodation and you're anticipating breakout rooms, keep in mind that you'll only have one captioner available; so if you have two people who need captions in different breakout rooms, you need to make sure that you have two captioners who can follow each of those individuals into their breakout room. Okay, so now that you've got all of your settings set, you'll need to turn on the captions in your Zoom meeting or webinar. To do that, on the Zoom toolbar you'll see a Live Transcript button; it's only visible to you as the meeting host. If you click on that icon, then the little pop-up window that's shown on this slide appears, and from this point you can select Enable Auto-Transcription. These are the same steps that you would take if someone had requested an accommodation and you needed to assign third-party access for a human captioner. If you decide to record your session, you'll be presented with options to either save locally or save to the cloud, and you want to make sure that you save to the cloud, as that will give you access to the transcript so you can make changes later on if necessary. Once your meeting or webinar has ended, it will take quite a bit of time for the recording to be saved to the cloud.
When that process is complete, you should receive an email from Zoom with a link to the recording, and clicking on that link within the email will prompt you to log into your Zoom account in your web browser and will take you directly to the cloud recording. This slide shows the account page that the link takes you to. Now I want to point something out: notice in the lower left-hand corner, where I've got a little arrow pointing, you can see that the audio transcript is still being processed. What this means is that even though the recording is complete, the transcript is not yet complete; it's still in the process of saving to the cloud. It takes a long time for the transcript to automatically upload, but when that process is complete you'll receive another email from Zoom notifying you that the audio transcript is available. You can click on the link within that email, and that will take you to the same page, but this time it will show you the file size of the audio transcript; in this case it's about two megabytes. If you click on that, then you're able to just click on the play button in the center of that film icon, and that will open up the recording in a new browser window and give you the ability to edit the transcript. The video portion takes center stage, with captions just below the video, and the transcript appears popped out to the right-hand side. This slide shows a screenshot of that cloud recording with the closed captioning and audio transcript revealed. Now, it's possible to edit the audio transcript just by hovering your mouse over those little text balloons until a little pencil icon appears; you can see it here on this slide, where I've surrounded it with a red square. It shows up in the lower right-hand corner of the text bubble. If you click on that little pencil icon, the text bubble turns into an editable text field, where you're able to make changes to the content.
And once you're satisfied with your changes, you have the option to either select the check mark and save your changes, or select the X to reject your changes and go back to the original text that was already there. Editing in this way only changes the transcript; it doesn't actually change the captions. Can I interrupt, Gaby? Yeah, sure. We have a teaching moment of our own here, in that a few slides ago you walked everybody through the process of how to enable captions within a Zoom meeting. Here in this particular Zoom meeting I'm the host, and I've been following those exact procedures. And when I get to the next slide (fast forward, there you go), I click CC, and I see exactly what's shown here on this slide. But when I click Enable Auto-Transcription, I get a message that says automatic transcription is now on, but it doesn't actually seem to be on. So it seems to be broken at the moment, and we're going to do some live troubleshooting, which will be great for everybody. I think this is the first time I've ever experienced this; it has always been reliable in my experience. So maybe there's something happening, or it could be something wrong with my Zoom client. I'm going to make Anna Marie host, and then make myself co-host so I only lose my privileges temporarily. Then she can test and see if she can turn on captions. If she can, that's a problem with my Zoom client; if not, then it's a problem somewhere further upstream. So anyway, we'll explore that, and whether or not captions get enabled, you'll know what the outcome was. Now, nobody who registered said they actually needed captions, so it's not an accommodation issue today. But this is yet another reason why it's important to have human captioners.
If you actually have somebody who needs captions as an accommodation, computers don't always behave as you think they will. Okay, great, so we're working on that in the background, it sounds like. So let's go ahead and fast forward... oops, I fast-forwarded too far. So let's talk about enabling captions in Panopto cloud recordings. This slide shows how you can enable captions in your Panopto cloud recordings. When you're signed into your Panopto account, make sure that you're in the My Folder view. From here you'll want to click on the gear icon that's located in the upper right-hand corner of the window; you can see I have a little arrow pointing to that gear. That will open up an Overview window, which gives you fairly basic information. Select the Settings item in the left-hand navigation menu, and this will display more options. If you scroll down a little bit, you can see that there's an option for captions with a drop-down menu. You want to select Automatic Machine Captions, and that will automatically generate captions when you save your Panopto recordings to the cloud. To save from here, all you have to do is exit out of this window. Once your Panopto video has completed saving to the cloud, you'll be able to edit the captions within Panopto. Go back to the My Folder view, and you'll see a list of the videos that you have saved in the cloud. Just select the video that you'd like to review, and the video will open on a new web page in the Panopto caption editor, which is what we see here on this slide. Now, in the upper right-hand corner, you'll notice there's a pencil icon, and I've placed a red square around that icon to easily identify it. When you select that icon, it will allow you to edit the captions in the caption editor.
However, you need to perform one more step to make things a little easier on yourself. On the left-hand navigation menu, you need to select Captions, and I've enclosed that in a red box as well. This will display the captions and the timestamps that you see showing up to the left of the video preview window. From here, you're able to do a variety of things. Most importantly, you can edit the transcript quite easily. You can also see the waveform of the audio below the video portion, which can make it a little easier to line up the timestamps with the audio. And you can also see where the slides are introduced along the waveform during the presentation. Once you've made your changes and you're satisfied with them, you can select the Apply button at the top of the window. This will update the transcript, and the updates will be reflected in the captions that appear when the video is played. All right, let's talk a little bit about editing captions in YouTube. When you upload your videos to YouTube, captions are automatically generated using ASR, so you don't have to change any of your settings within YouTube in order for this to happen; it just happens automatically when you upload. This slide shows a screenshot of videos that were uploaded to my YouTube account. At the very bottom is the original video that I had uploaded to my YouTube instance. The other item above it, I'll get to that in a bit. But I want to point out, in the right-hand column, the link text that says Duplicate and edit. When you first upload your video, that link will say Add. Now, remember in Zoom it takes a long time for videos to be saved to the cloud, and an even longer time for the transcripts to be saved to the cloud. Well, the same thing is true for YouTube as well.
So if you click on that link when it says Add, it will allow you to start typing and create a transcript from scratch. Or you can just wait until that text has changed to Duplicate and edit, and then you'll be able to edit the transcript that was automatically created. Clicking on that Duplicate and edit link will open up the video in the caption editor, as we see here on this slide. There's one modification that you need to make in order to get to this particular view, where you can see the transcript and the timestamps. If you look at the upper middle column, you'll notice link text that says Edit as text. When you first open up the video in this view, it will show a block of text with no formatting and no timestamps, just pure text, and the link in the middle column will say Assign timings instead of Edit as text. Clicking on that Assign timings link switches to this view, where you can see text bubbles and the start and stop times of when the text will be presented on the screen. And in the window just below the video, again, you're able to see the waveforms of the audio, and just above those waveforms are blocks of text. You can slide these blocks slightly left or right to move the timings ever so slightly, and it's really easy to do, so it makes editing a lot easier. When you're working on your captions, you want to make sure that you save your draft, because this will allow you to come back and work on the video at another time. When you do that, the working draft appears in your video list as a duplicate of the original video that we saw on the previous slide. When you're satisfied with your changes, you can select the Publish button, and that will publish the changes immediately. So what do you do when the automated captions are too bad to be edited, which can happen sometimes?
Accessible Technology Services will caption a limited number of UW video presentations without charge, through a captioning service supported by UW-IT. You can apply for funding to caption highly visible, high-impact, multiple-use, strategic videos: ones that are used several times in a course, or that have a lot of very important information. I've included a link for this service at the end of this presentation on the resources slide, in case you want to learn more about that. You might also want to consider using the state contract with 3Play Media. The University of Washington has a contract with 3Play Media for captioning services, and it provides integration with YouTube, Panopto, and other platforms as well; I've included a link for that on our resources slide too. This slide shows the different caption file options offered by 3Play Media. These are all standard file types used by popular media players; however, the format of each one of these file types is slightly different. So the takeaway from this slide is that it's important to know which file format your video player supports, as that's going to determine the type of file that you choose. This slide shows the format of a WebVTT caption file. You can see it's just plain text, with a timestamp indicating when the text will appear on screen; the start and end times are separated by two dashes and an angle bracket. You can edit these files using a really simple plain text editor such as Notepad; you don't really need a fancy caption editor to adjust the captions, although one certainly makes things a lot easier. And if you are editing these types of files in something like Notepad, it's very easy to make simple mistakes, such as entering a wrong number, adding an extra space, or using a semicolon instead of a period, and that can really compromise your captions, so you want to be very careful about that.
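To make that concrete, here's a minimal sketch of what a WebVTT file looks like, along with a quick Python sanity check for the hand-editing mistakes just mentioned (a semicolon instead of a period, a malformed timing line, an end time that isn't after the start time). The cue text is invented for illustration, and this only checks cue timings, not the full WebVTT grammar:

```python
import re

# A well-formed WebVTT cue timing line looks like:
#   00:00:01.000 --> 00:00:04.500
# i.e., HH:MM:SS.mmm, two dashes and an angle bracket, HH:MM:SS.mmm
TIMING = re.compile(
    r"^(\d{2}):(\d{2}):(\d{2})\.(\d{3}) --> (\d{2}):(\d{2}):(\d{2})\.(\d{3})"
)

def to_seconds(h, m, s, ms):
    return int(h) * 3600 + int(m) * 60 + int(s) + int(ms) / 1000

def check_vtt(text):
    """Return a list of (line_number, problem) tuples for a WebVTT file."""
    problems = []
    lines = text.splitlines()
    if not lines or not lines[0].startswith("WEBVTT"):
        problems.append((1, "file must begin with a WEBVTT header"))
    for n, line in enumerate(lines, start=1):
        if "-->" in line:
            m = TIMING.match(line)
            if not m:
                problems.append((n, "malformed cue timing"))
            elif to_seconds(*m.groups()[4:]) <= to_seconds(*m.groups()[:4]):
                problems.append((n, "end time is not after start time"))
    return problems

sample = """WEBVTT

00:00:01.000 --> 00:00:04.000
Welcome to the webinar on video accessibility.

00:00:04;500 --> 00:00:07.000
This cue's start time uses a semicolon by mistake.
"""

print(check_vtt(sample))  # flags the semicolon on line 6
```

Running a check like this after hand-editing in Notepad catches the kind of typo that would otherwise silently break your captions in the player.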
There are other caption editors available that are free; DotSub is one of them, and Subtitle Horse is another. So, which videos have the highest priority when you're considering captions? Well, certainly videos that are required viewing for individuals who need an accommodation would be a high priority for captioning, but videos that are likely to become required viewing for individuals who need an accommodation should also be considered; you're thinking ahead about what the needs may be. Other videos to consider include ones that are popular and viewed a lot, videos that are relatively new (captioning should be part of the workflow), and videos that provide critical content. So how do you prioritize your videos for captioning? I'm actually going to hand it over to Terrill, and he is going to talk more about that. Thanks, Gaby. Let me share my screen; I think actually I'm going to share my entire desktop, because I've got a browser window to share as well as PowerPoint. So, in the browser window: this is a tool that we created called YouTube Caption Auditor, or YTCA, and that's the tool behind this website. There are 88 known YouTube channels at the University of Washington; I think there are probably others that we don't know about. We're able to use this tool to engage with the YouTube Data API, and that returns all sorts of data about the videos on the known channels; all we have to do is feed it a channel ID, and it gives us a bunch of data. That's enabled us to create this table that compares YouTube channels on how many videos they have and how much captioning they're doing on those videos, among other things; there's a lot of data here about traffic and so forth.
The red channels are the channels you don't want to be; those are the ones that haven't done any captioning. The green channels have captioned 50% or more of their videos, and the white channels are somewhere in between, so you want to be a green channel. As we see down at the bottom under overall performance, 29 of the channels, or 33%, are green channels, the ones that are doing well, and only five channels have not yet started, so that actually is pretty good. And the overall 43.2%: that's the percentage of videos, out of all 88 channels and over 10,000 videos, that have been captioned. Since we started tracking this just a few years ago, we've grown from 7%, so this is growing. It's wonderful, and we're glad to see it's growing, but 43.2% could obviously still be better. This is behind a UW NetID, and it's only accessible to specific individuals who have been granted access. But if you are a person who owns a YouTube channel, or has some influence over your departmental YouTube channel, then just let me know (tft is my UW NetID and my email), and I can then grant you access. The other thing, in addition to comparing where you are relative to other YouTube channels, is that you can look at details about your channel. I'm going to pick on the Center for Sensorimotor Neural Engineering; they're actually doing pretty well. You get a summary overview at the top, but what is really helpful for prioritization is that you see all your videos; they have 22 videos. You can sort this table by any of the columns. By default it's sorted alphabetically, but a good way to prioritize might be views, so we click on Views and look at the most-viewed videos first. We see that there's a video called What is Neural Engineering that is by far their most popular; it's got over 6,000 views, and it actually hasn't been captioned. It's an older video, so we might want to consider date and views together.
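As a rough sketch of the kind of reporting just described (the field names, the example data, and the tiebreaker are my own illustrations, not YTCA's actual internals), here's how you might classify a channel red/white/green and build a caption-priority list from per-video data, sorting uncaptioned videos by views with publication date as a tiebreaker:

```python
from datetime import date

def channel_color(videos):
    """Red: no captioning yet; green: 50%+ captioned; white: in between."""
    captioned = sum(v["captioned"] for v in videos)
    if captioned == 0:
        return "red"
    return "green" if captioned / len(videos) >= 0.5 else "white"

def caption_priorities(videos):
    """Uncaptioned videos, most-viewed first (newest wins a tie on views)."""
    todo = [v for v in videos if not v["captioned"]]
    return sorted(todo, key=lambda v: (v["views"], v["published"]), reverse=True)

# Hypothetical channel data, loosely modeled on the demo in the webinar.
channel = [
    {"title": "What is Neural Engineering?", "views": 6000,
     "published": date(2014, 5, 1), "captioned": False},
    {"title": "Lab tour", "views": 850,
     "published": date(2019, 9, 12), "captioned": True},
    {"title": "Student stories", "views": 420,
     "published": date(2020, 2, 3), "captioned": False},
]

print(channel_color(channel))  # "white": 1 of 3 videos captioned
print([v["title"] for v in caption_priorities(channel)])
```

The same sort order falls out that Terrill demonstrates by clicking the Views column: the most-viewed uncaptioned video surfaces as the top priority.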
So as we're prioritizing, certainly their most popular video, even if it's older, should be captioned, so that would probably be their top priority, the one that they should focus on. As we look down through the list, sorted by views, we see that there are just a couple of others; actually, this is the entire list, so only three videos total haven't been captioned. It would be really easy to just knock those three out, but certainly start with the one that gets the most views. And if you've got more videos than 22, and more to caption, and you need to prioritize, then I would suggest some combination of views and date. But once again, just let me know if you want access to this, and I'll be happy to provide that; feel free to spread the word too, as this would be a really useful tool for prioritizing. So, I want to talk about audio description. Actually, maybe I should pause before we do that, because this is quite a break in topics, a very different sort of topic than captioning. Are there questions for Gaby about captioning before we proceed with audio description? Feel free to either type a question in chat, or if you just want to unmute and ask, I think we're a small enough group to do that too. Terrill, I just wanted to point out that Sarah said in chat that captioning is not available in Zoom for the School of Medicine, due to HIPAA concerns, so I just wanted to point that out. And also, I think you mentioned it, but in case you didn't, it's also not available in breakout rooms. Right, right. Sounds like no other questions about captions. Okay, so you can consider these sort of siblings: you've got captions, and you've got audio description. Captions, although they benefit a lot of people, have one primary user group in terms of accessibility and disability: people who are deaf or hard of hearing need captions because they can't hear the audio.
Audio description is for people who can't see the video. So it's two different user groups, and two different features that benefit those groups. When somebody can't see the video, often they can understand the video just by listening to its audio. But the question is: is there anything in this video that is not understandable because it's visual only? When you have visual-only information, that information somehow needs to be conveyed to people who can't see it. Primarily we're talking about people who are blind, maybe people with low vision, but it could also be somebody who's sort of watching the video while actually doing other things. This happens often: you've got a video that you're trying to catch up on, or trying to learn from, whatever, but you also have work to do, so the video is sort of peripheral, maybe even on a side monitor, maybe even in another room, and you still need to be able to access that content. Audio description is one of the terms by which this feature is known, but sometimes it goes by other names, like descriptive video, or description, or video description. That can be a little bit confusing, but if it has the word "description" in it, we're probably talking about the same thing. It basically is a separate narration track that verbally describes key visual content. There's one other term here on the slide that we're going to talk quite a bit about, because it's important, and that is extended audio description. The way audio description works is you've got a video, and you have something that needs to be described, because otherwise the video has some content that's inaccessible to people who can't see it. That description needs to be inserted into the video, and there are different ways of doing that, which we'll talk about. But some videos have so much spoken audio that there's really no place for the description to be inserted.
So what needs to happen then is the video needs to pause while the description happens, and then resume after the description is over. That process, pausing the video in order to describe something and then resuming playback, is extended audio description, and there are different ways of doing that. So what we're going to talk about during this segment is, once again, prioritizing: determining which videos are most in need of audio description, because they are not all the same; some require audio description more than others. Then we're going to talk about how to actually do this. We're going to present three, maybe four (depending on how you count them) approaches to audio description, including avoiding the need for audio description altogether. On prioritization, it's basically the same as captioning: you look at your views, you look at the publication date, and you can use YTCA, although YTCA reports from YouTube on whether a video is captioned or not, so you get that yes/no field; it doesn't know which videos have been audio described. So it can't help with that directly, but it can help with prioritization, because you can see which videos have the most traffic and their publication dates, and you can use that table. Then you can click on the name of the video, and that actually opens the video in YouTube, so you can watch the video and see whether it needs description or not. Another item here in prioritization on this slide is audience demographics. If you know that there are people with disabilities, or people who would particularly need audio description, in your audience, then that would be something where you certainly would want to focus on getting those videos described.
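The choice between inline and extended description can be thought of as a gap-finding problem. This is only an illustration of the idea (real describers and description tools make this judgment by ear; the function, spans, and thresholds here are invented): given the spans of spoken audio and the length of a description, either the description fits in a natural pause, or the player has to pause the video, describe, and resume, which is extended audio description.

```python
def plan_description(speech_spans, cue_time, cue_duration):
    """Decide whether a description cue fits into a pause in the dialogue.

    speech_spans: sorted (start, end) times, in seconds, of spoken audio.
    cue_time: when the visual content that needs describing appears.
    cue_duration: how long the narrated description lasts.
    Returns "inline" if there is a long-enough silence at cue_time,
    otherwise "extended" (pause the video, describe, then resume).
    """
    for start, end in speech_spans:
        if start <= cue_time < end:
            return "extended"  # the cue lands in the middle of speech
    # Find where the silence containing cue_time ends, and see if the
    # description fits before the next stretch of speech begins.
    next_starts = [s for s, _ in speech_spans if s > cue_time]
    gap_end = min(next_starts) if next_starts else float("inf")
    return "inline" if cue_time + cue_duration <= gap_end else "extended"

# A video with speech at 0-10s and 12-30s: a 1.5-second description
# starting at t=10.2 fits the gap, but a 5-second one does not.
speech = [(0, 10), (12, 30)]
print(plan_description(speech, 10.2, 1.5))  # inline
print(plan_description(speech, 10.2, 5.0))  # extended
```

This is why a wall-to-wall narrated video almost always needs extended description: there is simply no gap for the cue to land in.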
And the basic idea is: watch the video with your eyes closed, or just imagine that somebody is watching this and they can't see the visuals. What are they missing? What are the important details? I like to break it down into three priority levels. A high priority need for description is when nothing makes sense with audio alone. A medium priority is when the video is generally understandable just listening to the audio, but there are some critical details that are lost. And a low priority is when some information is lost, but it's probably not critical information in the grand scheme of things. So I've got a few videos that I wanted to show, and we can think about which of those three priorities each particular video is. Let's start with Together We Will; I think that's here. This is on the UW channel, so let's just watch a little bit of this. Oh, and I apologize, I forgot to turn on my audio, so I'm going to do that: I'm going to stop sharing and share again, but share sound this time, so you actually will hear what's in my headphones. So, I think the answer is obvious here. Those of you who are able to see this video get a lot more out of it than those who can't see it. This is just a music video; it's some very nice music, but there's no message here other than that there's some music. So this is a high need for audio description. Now let's look at this; it's actually a similar video. This is The Best of UW 2016, on the president's blog. We could go on and on with that; obviously, it's the same as the Together We Will video. It's all music; there's actually no spoken content, nothing that's narrated, nothing that's verbalized. Everything is either images or on-screen text; both of these videos had on-screen text, but that's not accessible to somebody who can't see it.
But this video is different from the other in that there actually is a link underneath that says the video is also available with audio description. You can follow that link, and that takes you to YouTube to watch the same video, but this is the audio described version: "Words appear: #BestOfUW2016. The Nobel medal, next to David J. Thouless: 2016 Nobel Prize in Physics. With President Obama: Mary-Claire King, National Medal of Science. UW and Microsoft break record for DNA data storage. A collage of photos: inaugural Husky 100; inaugural Parents and Family Weekend." Obviously this version is much more accessible than the one that has no audio description, so you can see how important audio description is. Also, if you actually listened to that, notice the quality of the audio description: this is voice-over talent, an actor reading the script. It's mostly on-screen text, but there are a few other details; President Obama appeared in a scene, and the voice actually says "President Obama," whereas there's no on-screen text that says that. But also, just that human narration, the quality of it, sort of fits with the music. That's very important in some contexts, but depending on how important that is, there are other ways to do audio description; it doesn't have to be human narration. We're going to talk about that as we get into the how-to section. Let's look at another example. This is IT Accessibility: What Campus Leaders Have to Say, a video that we put together years ago; obviously, it was when Michael K. Young was still president of the university. But let's watch a little bit of this and ask that same question: is this accessible to somebody who can't see it, or are there some important missing details?
"We are committed to the notion that everyone should have an opportunity to participate in higher education, whether it be from the learning perspective or the research perspective, or an opportunity to work here at this institution. We benefit from that because we get to enjoy the talents and the skills of those people who come in, and also their perspective, which in many cases will be different from the perspective of others on campus." So what do you think? Feel free to just shout out an answer: was this a high priority need, a medium priority, or a low priority? And why? Any volunteers? Okay, I'll just tell you: this, I would say, is a medium priority. You can hear what the person is saying, but it is just a person. Why should we care whether this person thinks IT accessibility is important? It really is significant that this is Michael K. Young, president of the University of Washington, and actually the entire video is a montage of CIOs and high-level people at universities around the country. And you don't know that; you don't know who any of these people are as they're talking about IT accessibility. Visually, you see an on-screen graphic that identifies them, but that information is missing from the audio. So this is critical information, and because it is so isolated, just a little bit of information here and there, it could be handled with a different technique; you don't necessarily need a human narrator for that. There are other ways to do it. Let's watch one more example. This is another video that we produced, called Teamwork: Making IT Accessible at the University of Washington and Statewide. My name is Sheryl Burgstahler, and I direct Accessible Technology Services at the University of Washington, through our Access Technology Center and other services.
We're making sure that the IT we develop, procure, and use at the University of Washington is accessible, whether we develop it ourselves, as with websites, or work with vendors if it's a commercial product. My name is Patrick Pow, from University of Washington Tacoma. My responsibility is technology; I'm the Vice Chancellor for Information Technology. So this is a case where this is really a low priority, and actually even lower than low: it's a no priority. We have avoided the need for audio description. Otherwise, the only thing that would have needed to be described is the on-screen text that identifies who is speaking, but everybody who speaks in this video introduces themselves, including not just their name but their title and affiliation. So everything in this video is accessible, and there's no need at all for audio description in this case. Obviously, then, some videos are more critical candidates than others for getting description. Now how do you do this? What sort of methods are there for getting video described? One, you can hire a traditional audio description service provider; that's what we heard with The Best of UW 2016, that's human narration. Two, you can hire a captioning vendor. I'll talk about 3Play Media, who we have a captioning contract with; they also do audio description as an add-on service, so that's an option. Three, you can do it yourself using a timed text file. Gaby talked earlier about the WebVTT file format; that can be used not just for captions, but also for descriptions. And four, you can have students do it, and I'll talk about what I mean by that in a bit. So, first of all, hiring a traditional description provider. There are some links here, and the slides are going to be available afterwards along with the recording, so you'll be able to access these links directly on the Accessible Technology video accessibility page.
There is a link there to the American Council of the Blind's directory of audio description service providers, and there are nearly 100 companies now that are in this business. But they vary in terms of the scope of the services they provide. Some do only live description, where they describe live theatrical events, that sort of thing. Some are focused only on really big productions, Hollywood-type stuff. So we narrowed the scope of that directory and surveyed the companies that seemed to fit, and what we came up with in the end were seven providers that seem to fit the higher education need: they can do description in a timely fashion, on comparatively small jobs, and at an affordable cost. So, seven choices there, and those are all linked on the making-video-accessible page. Professional providers use professional voice-over talent: they script the description, they read the script, and then they professionally mix it so that the description content is balanced really nicely with the program audio. They may duck the program audio a little bit while the script is being read and then bring it back up again, and it all flows really nicely together. The typical deliverable, at least for us, is an audio described version of the video, like on the President's blog: the non-described version is embedded on the blog, but then there's a link to the described version, a separate video that has description mixed in. That's typically the way this is delivered. And the typical price range is $10 to $15 per video minute; it varies between those seven providers, but the prices are coming down quite a bit, largely because 3Play has entered this market and driven the price down, I think.
It also varies depending on complexity, and extended audio description, where they have to pause the video while they're describing, tends to have a higher cost associated with it. In this example again, wherever the original video occurs, there should be a link to the audio described version; that's the simplest way to deliver this. Although the link to the audio described version should actually be above the video rather than below it, because by the time a person reaches a link below the video, they have already labored through watching the original video and discovered that it's not accessible, and then they continue on down the page and find that there actually was an audio described version all along. That can be a frustrating experience, so put it above the video, not below. Approach number two is to hire a captioning vendor: again, 3Play Media, or Automatic Sync, who does the same kind of captioning work that 3Play does; they both do audio description now. The cost is slightly less: 3Play Media charges $7.50 a minute for standard description, $11 per minute if extended description is needed, and then the price goes up if you have an expedited request. There are a few gotchas, though, a few things that are distinctive about what they do. One is that captioning is required, even if the video is already captioned. That seems to me to be kind of an inefficiency: I don't need captions, I just need audio description, but captioning is very tightly wedded to their process. They depend on the timing of the captions in order to figure out, programmatically, where they can inject description; they've got a really efficient description process that is built around captioning, since what they did originally was captioning.
I've talked to them about separating those and offering description as a standalone service, but at this point they're not able to do that, so that actually drives the price up a little bit, because you also have to pay for captioning even if you don't need it. The output also uses synthesized speech, so you're saving on cost a little bit because they don't have human narrators who have to be paid to do this. As to whether synthesized speech is satisfactory to users, there actually has been quite a bit of research on that, and the answer, it turns out, depends on the context. Users are okay with synthesized speech, particularly in academic content. If it's a dramatic work, they're happy with any description they can get, because there's not that much of it out there, but they really prefer human narration for dramatic works, because a synthesized voice kind of gets in the way and distracts them from the production. So the deliverable: there are lots of choices, but it can be the same thing, an audio described version with that synthesized voice mixed into the video. I'm getting a little low on time, so I'm going to hurry through some of these slides, but I have some slides showing what this looks like on the 3Play Media website, where you can choose whether you want extended or standard description and what your timeframe is, and it shows the costs associated with that. And because it's synthesized speech, you have lots of choices in terms of the voices that you use. If anybody ends up using 3Play Media for their audio description, talk to me about this, because we actually have some research to support what the best choices are when it comes to a synthesized voice, and that depends on your content. And then there are lots of choices for output.
But again, the best approach, the best output, which just works everywhere, is to have a separate described video and to link each version from the other. Approach number three is a WebVTT file. This looks like the slide that Gaby had up earlier with caption text, but instead of caption text, you have description text, and the way this works is that the browser reads that content and provides the audio description. So I want to get out of my slides here and go back to this video, which actually does have WebVTT-based description. I have to turn it on with the D button on the media player, and then I'll restart it and you'll hear the description. This also has extended description built in, so it will pause while the description happens, and then automatically resume when the description is finished. So let's check it out. "Michael K. Young, President, University of Washington." "I'm committed to the notion that everyone should have an opportunity to participate in higher education, whether..." (I'm going to speed Michael up a little bit) "...or the research perspective... We benefit from that because we get to enjoy the talents and the skills of those people who come in, and also their perspective, which in many cases will be different from the perspective of others on campus. So accessibility becomes a very important value at the university." "Images of a teacher and students in classrooms and at computer stations. Text moves on a closed-circuit TV. Words appear: IT Accessibility, What Campus Leaders Have to Say. Tracy Mitrano, Director of IT Policy, Cornell University." "We're a leading university globally. We want the best talent." So the nice thing about this approach is that it's a WebVTT file, so it's super simple. Gaby mentioned that you can do this in Notepad, though it's easier to do it in a captioning tool.
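To make that concrete: a descriptions file is just a WebVTT file whose cues contain description text instead of caption text. As a minimal sketch (the helper names and cue text here are my own illustration, not material from the presentation), a few lines of Python can assemble one from a list of timed cues:

```python
def vtt_time(seconds):
    """Format a time in seconds as a WebVTT timestamp, HH:MM:SS.mmm."""
    h, rem = divmod(int(seconds), 3600)
    m, s = divmod(rem, 60)
    ms = round((seconds - int(seconds)) * 1000)
    return f"{h:02d}:{m:02d}:{s:02d}.{ms:03d}"

def build_descriptions_vtt(cues):
    """Build WebVTT file contents from (start, end, text) description cues."""
    lines = ["WEBVTT", ""]
    for start, end, text in cues:
        lines.append(f"{vtt_time(start)} --> {vtt_time(end)}")
        lines.append(text)
        lines.append("")  # blank line separates cues
    return "\n".join(lines)

# Hypothetical description cues for a short video:
cues = [
    (0.0, 3.5, "Michael K. Young, President, University of Washington."),
    (42.0, 47.0, "Students work at computer stations in a campus lab."),
]
print(build_descriptions_vtt(cues))
```

The resulting file is attached to the video as a description track rather than a caption track, per the HTML5 spec, with something like `<track kind="descriptions" src="descriptions.vtt" srclang="en">` inside the `<video>` element (the filename here is a placeholder). Browsers expose the track but generally leave the actual speaking to a supporting player.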
But if you just have a few lines of description text, as in this video, where there's not that much that needs to be described, you can very easily do this in Notepad, and in five minutes you've got your description. The catch, though, is that you have to be using a media player that supports this. It's built into the HTML5 specification, so this is the way the W3C envisions audio description happening, but right now Able Player is the only media player that supports it. That's the player we developed internally; it's free and open source, and it's available as a WordPress plugin (a little bit of a barrier to entry at the moment, but we're working on that) as well as a Drupal module. So it's out there, and what you're seeing now is Able Player. So WebVTT description can be a solution. The advantages are that it's easy, it's built into the HTML5 spec, and only one video is needed; that's the nice thing, too. The extended audio description happens automatically, whereas if you have a video with extended audio description mixed in, it's a longer video than the original, so the durations don't match up, which means you have to get both videos captioned separately, which can be inconvenient and add to the cost. The thing to consider, though, is that audio description is an art: finding the right words to say, words that don't distract. There really is a technique to this, and people spend a lot of time training to become audio describers. So I don't want to say you can just do audio description yourself; if it's more complex than identifying the names of speakers and providing a little bit of description here and there, it probably should be sent out to the professionals who do this kind of work. Option four is to have students do it.
And I mention this because there is a group of students at the University of Washington; this arose out of an undergraduate entrepreneurship course. They're calling themselves Video Eyes, and they are doing automatic audio description using AI, which is a really interesting concept. I was skeptical at first, but I've been checking out some of their work, and I want to share this Together We Will video, which I showed you at the beginning, but in their version. So let's see what they can do with artificial intelligence. "This is an extraordinary time. Now, more than ever, we all have a duty to look out for one another, for our most vulnerable." So, pretty impressive stuff. Obviously in this case they're just using OCR, so it's reading on-screen text, and that is a synthesized voice, but they've invested in one of the premium synthesized voices, so it really sounds pretty natural. But they're venturing beyond this and actually have some other demos where they're able to identify what's happening on the screen, objects on the screen, and that's pretty interesting work they're doing. You never know what you're going to get when you ask students for a solution. That brings us to the Washington State Audio Description Project, a project that we have just launched. This is working with other state higher education institutions to get more video audio described, and we're providing the funding for it. It's funded through the DO-IT Center, which is part of our group, but we've got State of Washington money to do this project. The goal is to work with these partner institutions to get high priority videos described, so we're going to provide support, we're going to be the liaison with the vendors, and we'll see how much video we can describe between now and the end of the fiscal year.
The University of Washington is one of the partners in this, and we want to describe our own videos in addition to the other institutions' videos. So if you have videos that you feel are high priority and should be described, and you would like to participate in this project, just let me know; here's my email. There are two things you can let me know about: one is if you want to participate as a UW participant in this project, and the second is if you want access to the YPCA reports, so you can use that tool to help prioritize your captioning and description efforts. The last slide we have here is just a list of the links we have talked about. The most important one is uw.edu/accessibility/video; that's kind of our hub for accessible video information, and you can access everything else from that website. So I'll leave it at that. We're pretty much out of time, but I'm happy to entertain questions for anybody who wants to stick around; we can run a little bit long, that's fine. I'm going to stop sharing so I can access chat a little easier. And again, if anybody wants to unmute and just ask a question, you're welcome to do that as well. There was a question earlier about captions; did you see that in chat, maybe? I actually answered that. Okay, yep. Excellent. So Nancy wants to know... I know you can provide specialized words to captioning services if you're captioning a video rather than a live event. Was that a comment on an earlier question? Okay, trying to catch up, sorry. I've actually been talking with the Video Eyes student team quite a bit, and adding vocabulary is also an important thing for description. It's interesting: the video that was outsourced, the president's video, The Best of UW 2016, was outsourced.
There were a few things in that outsourced description, which I thought went really well overall, but a few things that, if it had been an internal job, they would have done differently: they would have referred to certain landmarks by name, like Suzzallo, and other prominent features of the landscape, as well as prominent people like the president. They recognized President Obama, but they didn't recognize President Cauce. So there are benefits to being able to provide some context to an audio description provider, and with this AI-based solution they actually are working on building that kind of thing: a way to upload some context, vocabulary, things the engine should be looking for when doing the description. It should be really interesting to see how that evolves. Okay, so I think that about does it. Again, thank you all for attending. The recording will be up probably within the next week or two; we still don't have the past recordings up yet, but we're going to get all these videos up at the same time, along with the slides, so watch for those on the Accessible Technology website. And come again this time next month, and we'll have another presentation. I forget actually what's on tap next time; is it Hadi testing pages with a screen reader? I think that might be the next one up. Yes, I believe that is correct. Awesome. Great. Well, thanks everybody for coming. I am going to stop the recording now.