Thank you, everybody, for joining us today on this, I believe, the first day of fall. And I can tell it's fall because I'm cold. Or I have a cold. I hope it's not a cold. So thanks, everybody, for joining me. I'm Gaby de Jongh. I'm an IT Accessibility Specialist with UW-IT. And joining me today is Terrill Thompson, and he will introduce himself in a little bit. But I'm going to go ahead and start off this presentation, and today we're going to talk about video accessibility. Okay, great. So who is impacted by inaccessible video? When we think about accessible video, we should think about who will be impacted by inaccessible video and how. So users who are deaf, hard of hearing, or who are in a noisy environment may not be able to hear the audio, and the solution for that is to provide captions. Now, most video players can display captions, and many video meeting platforms offer automatic, machine-generated captions that can be turned on on the fly. In a bit, I'm going to present solutions for enabling captions in popular video applications used here at the university. Others who may be impacted by inaccessible video include users who are blind or have low vision, or who are unable to see the video due to obstructions. The solution to this barrier is to provide audio description, and Terrill is going to talk more about audio description, what it is, and offer some solutions later on in the presentation. Users who are unable to hear and see the video and audio will also be impacted, and the solution for this is to provide a transcript. A video transcript is a text version of the audio. It usually identifies the speakers if there is more than one, and it can sometimes include contextual clues for non-speech sounds, such as music and laughter. Transcripts are also useful for searching keywords and allowing users to jump to a specific section of a recording.
And there are other examples of people impacted by inaccessible video, including folks who don't use a mouse and only use a keyboard to navigate, or folks who use speech input to dictate documents or navigate and control systems. Other examples include individuals who use screen readers to access information and navigate, or individuals with low vision who depend on high contrast or a custom color scheme. And the solution to those barriers is to provide an accessible media player that can offer captions in multiple languages, audio description, maybe ASL interpretation, the ability to control the rate and speed of the video, as well as other customizable options. Terrill is gonna talk more about an accessible media player that was developed here at the UW. So, how the UW does accessible video. Just for clarification, we wanted to make sure we presented the differences between the accessibility offices and their responsibilities when it comes to accessible video. If a student has requested an accommodation, Disability Resources for Students, or DRS, will provide funding and support for captioning and audio description of course materials for that student. The Disability Services Office, or DSO, will provide similar services for faculty, staff, and visitors to the UW. DSO also coordinates ASL interpretation and CART captioning for in-person and virtual events, so any requests for those services will need to go through DSO. And Accessible Technology Services, which is part of UW-IT, is the unit that Terrill and I belong to. We provide internal grant funding for proactively captioning high-impact videos, and we also provide training and support for UW departments with regards to accessible IT. I'll talk about the grant-funded captioning service more in depth later on in this presentation. Okay, so let's jump right in, and we're gonna start talking about captioning.
So I'm gonna demonstrate how to enable automatic speech recognition, which I also sometimes refer to as machine-generated captions, in Zoom, Panopto, and YouTube. And it's important to note that these platforms use artificial intelligence to translate human speech into text captions, but they're not accurate enough to serve as an accommodation for folks who depend on captions. Even though the accuracy is pretty high on machine-generated captions, they lack the ability to convey the context of what's happening, and sometimes they fail to label speakers. Also, technical, medical, and legal jargon and other specialized terms may not be transcribed accurately. With that said, Zoom's automated live captions use Otter.ai as the speech-to-text engine, and it's pretty darn good. So what we recommend is, if you're hosting a live event using Zoom and someone has requested captions as an accommodation, it might be appropriate to reach out to the party who made the request and ask if machine-generated captions are acceptable. You don't wanna assume that automatic captions are good enough for an accommodation. If the party agrees, then you should be okay. But if they say no, then you must make arrangements through DSO to hire a human captioner. You wanna make sure that you give DSO adequate time to hire a human captioner; usually it takes about three weeks to schedule one. It's also really good practice to announce to the entire audience at the beginning of your meeting or event that captions are available and how they can be accessed. Okay, enabling captions in Zoom. UW Zoom accounts can use machine-generated captions for meetings and webinars, and you can check that those settings are enabled when you're logged into your Zoom account through your browser. So I'm gonna switch here to my Zoom account within my browser, and I'm gonna check to make sure that I have automatic captions enabled.
And to do that, I can go to the left-hand navigation here and click on the Settings menu. Then I have another sub-menu here, and to get closer to where I need to be, I'm gonna click on the link that says In Meeting (Advanced). That's gonna take me a little bit further down this list, and I can continue to scroll. And right here is the option for automated captions, and I can see that this is a toggle switch and it's changed its color from gray to blue, so I know that it's enabled. Okay, so to turn on live captions, you need to be logged into your Zoom client as the meeting host, and then the Live Transcript button will be visible to you on the Zoom toolbar down here; you can see a little CC. This slide has a screenshot of a Zoom window with the Live Transcript pop-out window exposed. The Enable button for live transcription has a red triangle surrounding it, as does Allow participants to view live transcription. Selecting those two options will turn live captions and the live transcript on and off for your event. And incidentally, this is the same place you would go if someone had requested an accommodation and you needed to give a human captioner access to provide captioning services. In that case, you'd select the button that says Copy the API token, which is right above it here. That copies the token to the host's clipboard, and then the host can paste it into the third-party closed captioning tool. So we have live transcription enabled for this webinar, and you're welcome to turn on captions by clicking the CC icon in your Zoom toolbar. You can also view the transcript at the same time; the transcript pops out on the right-hand side and identifies who is speaking. Okay, so, just a second here. So once your meeting or webinar has ended, it's gonna take some time to save that recording to the cloud.
And when that process is complete, users will usually receive an email from Zoom with a link to that recording. Clicking on that link will prompt you to log into your Zoom account in a web browser, and it will take you to your recordings. You can also access them in your web browser by clicking on Recordings in the left-hand navigation menu. And then I'm gonna select the video here that we're gonna do some editing on. So keep in mind that it does take some time for the automated transcript to upload to the cloud, as the speech-to-text engine has to process that information after the video is uploaded. You're usually gonna get a second email from Zoom saying that the transcript has completed uploading to the cloud, and that's one way you can determine whether or not your transcription is complete. Or you can look in your web browser here, and you can see a link for the audio transcript, and it does actually have some content. You can also click on that and download the audio transcript, which is a VTT file; we'll talk about that in a little bit. So selecting this video will open it up in a new browser window. It's gonna populate here. I've got my main video window in the center portion of the web browser, and on the right-hand side I've got these little text bubbles and timestamps. You can edit this audio transcript by hovering over one of these little text balloons. When I do that, you'll notice that this little pencil icon appears, and if I hover over that pencil icon, you'll see that the word Edit appears as well. Clicking on that pencil icon will turn the speech bubble into an editable text field. From here, you can go ahead and make your changes. Zoom does a pretty good job with punctuation and capitalization, so I'm mostly looking for repeated words or misinterpreted words.
And once I've got that all situated and I want to save it, I can click on this little check mark here. When I do that, I'll get a notification in this green window that says the transcript text has been updated, which means it's been updated in the cloud. The other option is I can click on the X, and it will take me back to my original text bubble and reject those changes. So if you want to see the changes, you can continue to view the video from the cloud recording, and any fixes that you make in the transcript here will appear as updated. So I'm going to switch; I've got to move these controls out of the way. All right, let's switch video platforms and talk a little bit about Panopto recordings. At this time, it's my understanding that it's not possible to have machine-generated captions for live Panopto events. So if you have an accommodation request, you have to have a human captioner, and you have to go through DSO to reserve that. But it is possible to have Panopto recordings captioned after they've been uploaded to your folder, and it's also possible to have those recordings captioned automatically using machine-generated captions. Now, you want to make sure that your video is edited for length before you request captions, as any changes to the length of the video are going to affect the timing of when those captions appear on the screen. So first, I want to show you how to turn on automatic machine-generated captions at the folder level when you save videos to your Panopto folder. This is my Panopto instance here, and I'm going to go ahead and select my folder from the left-hand navigation window. That takes me to my folder view, where I've got my videos. From here, I'm going to click on this gear icon up in the upper right-hand corner. When I click on that, it gives me some more settings, and I have another sub-menu here.
So I'm going to click on the sub-menu link where it says Settings. Now, changing these settings is going to affect everything that is saved in your folder; anything that happens in this view applies to the entire folder. If I scroll down to the bottom here, you can see Captions, and I have this option here for machine-generated captions. This is what I want to make sure is already selected. So I'm going to go ahead and choose that, and then I'm going to close. Now, my understanding is that automatic machine captioning has been enabled at the folder level starting today. So I believe, is Laura Baldwin in the audience by any chance? She is not. She's not, okay, no problem, that's okay. I just got an email about 30 minutes before this webinar stating that Laura Baldwin, who is the service owner for Panopto, has enabled automatic machine-generated captions whenever a Panopto video is uploaded to your folder. So that process should already be happening starting today, which is pretty cool. Okay, now I want to show you how to request automatic machine-generated captions for videos that you already have in your Panopto instance but that may not already be captioned. Looking here in my folder, I can see there's a little CC icon that shows up just below the title of this video, so I know that this one has captions. But it doesn't show up underneath this other video here, so what I want to do is request automatic machine captions for this video. To do that, when I hover my mouse over the video, I get some different buttons I can choose from. I'm gonna select Settings and click on that. Then, in the left navigation menu here, I'm gonna click on Captions. And here is where I can request captions; it's a dropdown menu. From here, you can select automatic machine captions, and that's going to give you the basic captions.
Now, you can see I've got a lot of other options available here in my dropdown. I order a lot of human-generated captions for different departments and units, and these are all tied to different budget numbers, so you might not have this many options. These options all cost money, because you're having a human captioner do the captioning behind the scenes rather than having it machine-generated. But for free captions, you can just select this option for machine-generated captions and then select Order. Depending upon your video, it will usually take a few moments for that to happen. Okay. So once your Panopto captions have completed saving to the cloud, you can use those machine-generated captions as kind of a draft, and then you can edit the captions using the Panopto video editor. To do that, you wanna go back to your folder in the cloud and select the video that you'd like to review. I'm gonna choose this one down here, because I've been working on that. Clicking on that is going to open it up in a new browser window. And from here, I have more options in my left-hand navigation window. Sorry about that; I was hearing some talking, and that was me on the video. So in my left-hand navigation window here, I wanna select Captions. And here I can see my captions appearing on the left-hand side, and I've got my video in the main window. But if I click on these, I can't really make any changes to the text. So it's not very intuitive to make changes to the captions in Panopto. What you need to do from here is click on this little pencil icon again; if I hover my mouse over it, you can see the word Edit comes up. Selecting that is going to open up the video again in the video editor. Then I have to go to the left-hand navigation menu again and select Captions.
And now I can click into these text boxes and make my changes. I'm gonna go ahead and fix this real quick. Okay, great. So you can continue to work on editing your transcript, and you can see that Panopto actually does a pretty good job with punctuation and capitalization as well. So it's just a matter of going through and making sure that the words were actually captioned accurately. Once you're satisfied with all of your changes, you want to click the Apply button. If you don't click the Apply button, none of your changes are actually going to show up when you play your captions; it's still gonna show the old captions. So you wanna make sure to click Apply before you post that. Another nice thing about Panopto is that if you click Revert, it still remembers the original automatic machine-generated captions. So if you make a huge mistake, you can always click Revert, get back to your original rough draft, and go forward from there. Okay, so let's switch over to YouTube and review options for editing captions in that platform. When you upload your videos to YouTube Studio, captions are automatically generated using automatic speech recognition. You don't really have to change any settings within YouTube; this happens automatically when you upload. So I'm gonna go ahead and look at YouTube Studio. I'm in YouTube right now, but to get to my videos, I can click on the Your videos link here in the left-hand navigation, and that's gonna open up YouTube Studio, which is the video editor that we're going to use. Okay, so I've got a couple of different videos here, and this is one I uploaded yesterday. So let's take a look at this one. And we want to look at Subtitles, which is in the left-hand navigation menu. Okay, click on that. And then, when you upload your videos to YouTube, you'll notice that it says Add here, ADD.
And what that means is, if you click on that, you can add captions to your video. If you have a transcript, you can upload that transcript, or you could just start typing and populate the captions that way. But just like Panopto and Zoom, it takes a long time for videos to be saved to the cloud, and it takes even longer for the transcript to be saved to the cloud. So what you can do is, after you've uploaded your video to YouTube Studio, you could wait just a few moments, and this should change from Add to the words Duplicate and edit. Now, normally it should do that, but I think there's a bug in YouTube Studio; I just discovered this yesterday. You'll notice that I uploaded this video yesterday, and it's only a four-minute-and-16-second video. Through experience, I know that the caption file should have completed uploading by now, and this should have changed from Add to Duplicate and edit. So I'm gonna go ahead and click on it and see what happens here. Okay, so I click on that, and what it wants me to do is either upload a file, auto-sync, or type manually. But I don't wanna do that, because it should have already created a template for me. So I'm gonna close that and click again. Let's close that one more time and see if I can get it to work here. Well, yesterday when I did that, it worked after trying two times, so there must be something going on with YouTube Studio. That's okay. I'm gonna go back here, select this other video, and we're gonna edit this one. So I'll go back into Subtitles, click on Duplicate and edit, and here we go. So here we can see, this is our video editor within the YouTube Studio application, and I've got some text here. And you can see that YouTube actually does a terrible job with punctuation and capitalization, so we're gonna need to do quite a bit of editing here.
But if I click inside the text here, I'm not actually able to edit anything. There's one more thing I need to click on, and that's this link here in the center that says Edit timing. If I click on that, it parses the text out into little speech bubbles, and it gives me timestamps of when the words will appear on screen and when they will disappear off the screen. I need to do quite a bit of editing here in order to make this accurate, so just give me a second. Okay, so once I've made some basic changes, I can either save the draft or publish. I wanted to show a couple of things that are kind of interesting and unique to YouTube. Down here you can see the actual text that is going to be presented on screen. And right below that, it's kind of faint for me, so I'm not sure if you can see it on your end, but you can see the waveform of when this text is actually going to be presented on screen. So it's really easy to adjust the timestamp by clicking and dragging that block of text and moving it along the waveform so that it covers the time block when the person is actually speaking. That makes it really easy to do edits within this video editor. Okay, let me switch back here. Okay, so, when automated captions are too bad to be edited. Sometimes automatic captions just aren't that great. Maybe there were a lot of technical terms used during the presentation that weren't transcribed accurately, or maybe there are several speakers in a webinar and the automatic captions aren't identifying who is speaking, or there could be other factors that contribute to poor automatic captions. So this is just a friendly reminder about the captioning service, where we will caption your videos using real humans rather than the free machine-generated version. This is actually paid captioning, where humans do accurate captioning, but it's free to you through our service.
And Accessible Technology Services manages this project and will caption UW video presentations without any additional charge to your unit. Application submissions are reviewed by ATS staff to caption highly visible, high-impact, multiple-use, and strategic videos. And my understanding is there's quite a bit of funding currently available for this service, especially for captioning Panopto videos, so I highly encourage folks to apply. I've included a link to the service on the slide, right here, and there's also another link at the end of this presentation on the slide titled For More Information. For other videos, you might want to consider using a state contract that we have with 3Play Media, which offers captioning services with integrations for YouTube and Panopto, but also other platforms as well. That's $1.95 a minute for the standard rate. Okay, 3Play Media offers many choices. This slide shows the different caption file options offered by 3Play Media: SRT, WebVTT, and so on, which are all standard file types used by popular media players. However, the format of each one of these files is going to be slightly different. So the takeaway from this slide is that it's important to know which file formats your video player supports, as that's really going to determine the file type that you choose. Most caption formats are just plain text. This slide shows you the format of a WebVTT caption file, and you can see it's just plain text with some timestamps, colons, and so on. These caption files can be edited using a plain text editor such as Notepad++; you don't necessarily need a fancy caption editor, although one makes it a lot easier. Any misspellings or typos in a timestamp could really affect how, if, and when your captions are going to be displayed on screen, so it's very important that the format be exact.
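To illustrate, since WebVTT is just plain text, you can even script a basic sanity check for timestamp typos before uploading an edited file. Here is a minimal Python sketch; the caption text is a made-up example, and the check only covers the common full HH:MM:SS.mmm cue timing form, so treat it as an illustration rather than a complete WebVTT validator:

```python
import re

# A minimal WebVTT caption file: a "WEBVTT" header, then cues made of a
# "start --> end" timing line followed by the caption text.
# (Hypothetical example content, not from an actual UW video.)
vtt = """WEBVTT

00:00:01.000 --> 00:00:04.500
Thanks, everybody, for joining us today.

00:00:04.500 --> 00:00:08.000
Today we're going to talk about video accessibility.
"""

# Strict pattern for the common HH:MM:SS.mmm --> HH:MM:SS.mmm timing line.
TIMING = re.compile(
    r"^(\d{2}):([0-5]\d):([0-5]\d)\.(\d{3}) --> "
    r"(\d{2}):([0-5]\d):([0-5]\d)\.(\d{3})$"
)

def check_vtt(text):
    """Return (line_number, line) pairs for cue timing lines that look malformed."""
    problems = []
    for i, line in enumerate(text.splitlines(), start=1):
        if "-->" in line and not TIMING.match(line.strip()):
            problems.append((i, line))
    return problems

print(check_vtt(vtt))  # → [] (no malformed timing lines)
```

A typo like `00:00:1.0 --> 00:00:04.500` would be flagged, which is exactly the kind of error that can silently keep a caption from ever appearing on screen.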
Examples of other caption editors include Amara and Subtitle Horse. So, which videos are the highest priority? For sure, videos that are required viewing for individuals who need an accommodation would be high priority for captioning, but videos that are likely to become required viewing for individuals who need an accommodation should also be considered; think ahead to what the needs may be. Other videos to consider include ones that are popular and have a lot of hits or views, videos that are relatively new, where captioning really should be part of the workflow, and videos that provide critical content. So how do you prioritize your videos for captioning? Well, I'm going to go ahead and hand it over to Terrill at this point, and he's going to talk more about that. Terrill? Thanks, Gaby. Before we do that, there have been a few, I'd say mostly comments, in the chat about automatic captioning. For example, COVID being automatically transcribed as covert seems like a pretty major problem, and it just underscores the need to go back and edit. You have no control over that when it happens live, but if you're going to make the video available as a recorded captioned video, then definitely go back and look for those sorts of problems, as well as punctuation. Michael Oppenheimer points out that YouTube does not add punctuation when it auto-captions; Zoom does. And I'm told that Panopto does as well, with the new auto-captioning now live for all videos; it does also punctuate. So I don't know why YouTube doesn't. You'd think that they would be able to do that intelligently, but they've chosen not to so far. And then Colum asks, is there a way to export captions from Panopto? Do you know the answer to that? That's a great question. I don't know off the top of my head, but let me do some research, and then I will put that in the chat. Okay; I know that I have looked and not found a way.
So if there is a way, it's not obvious or not intuitive. Okay, so let me jump on to share. I'm gonna share my entire screen, so you may get a little bit of clutter and noise, but hopefully everything is relevant here. Mostly I wanna talk about audio description, but I wanted to start with just a little bit of additional information about captioning and this idea of prioritizing. We have actually developed a tool called YTCA, YouTube Caption Auditor, that facilitates that prioritization, within YouTube anyway. If you are a YouTube channel owner, or if you work for a department or unit that has a YouTube channel and you know the person who owns that channel and has the ability to upload and manipulate captions on that channel, then let them know about this if they aren't already aware. First of all, the tool itself: YTCA is an open-source tool that uses the YouTube Data API to collect all sorts of data from YouTube about the videos on a particular channel. We have a hosted version of it here at the UW that you can get access to if you are a YouTube channel owner, and it generates reports like the one shown in this screenshot. I'm picking here on the Center for Neurotechnology because they're actually doing a good job; most of their videos are captioned. They, and anybody who uses this tool, can see a list of all their videos. In this case, they have 22 videos on their channel. They can sort by any of the columns, and they can see whether each video is captioned or not. So it's super useful: if you just wanna find out which videos are not captioned, sort by the captioned column. If you've only got a few, you can see what they are and go ahead and caption all of those. If you have a lot of videos that aren't captioned and you need to start somewhere, you can't caption them all at once, so maybe sort by views, so you get your most popular videos at the top, or maybe sort by date.
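That sort-and-triage workflow, uncaptioned videos first and most-viewed first within that group, is simple enough to sketch in a few lines of Python. The video records below are invented for illustration and are not actual YTCA output:

```python
# Hypothetical channel data, shaped loosely like the fields YTCA pulls from
# the YouTube Data API (caption status and view counts).
videos = [
    {"title": "Campus tour",   "captioned": False, "views": 12000},
    {"title": "Lab safety",    "captioned": False, "views": 300},
    {"title": "Welcome video", "captioned": True,  "views": 50000},
]

# Sort uncaptioned videos first (False sorts before True), then by views
# descending, so the highest-impact captioning work floats to the top.
work_queue = sorted(videos, key=lambda v: (v["captioned"], -v["views"]))

for v in work_queue:
    status = "captioned" if v["captioned"] else "NEEDS CAPTIONS"
    print(f'{v["title"]}: {v["views"]} views, {status}')
```

With this ordering, "Campus tour" comes up first as the most-viewed uncaptioned video, which matches the prioritization being described: start with the popular uncaptioned videos and work down the list.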
So you get your newest videos at the top. And you wanna start with, I would argue, the most popular videos and the newest videos; those should be captioned, everything that you produce from this point forward should be captioned, and then you can gradually prioritize and go back and caption the other things. This tool really is useful for figuring out what you've got and getting metadata about each of your videos that isn't readily available through YouTube itself. It's also worth pointing out that the YouTube Data API, where we get this data, has a yes-no field for captioned, and if the video is just automatically captioned using YouTube's own service, the answer is no, it is not captioned. So even YouTube does not consider its auto-captions to be actual captions, and that kind of is telling about the quality of those captions. You either have to have a caption file that you've generated separately and uploaded to replace the auto-captions, or you have to have gone in and edited the auto-captions and saved them. And again, it may just be a matter of going through and adding punctuation, but if you do that, it converts that variable to a yes. So you can get to this at tinyurl.com/uw-ytca. But again, it is protected by UW NetID, and the idea is that this is a tool for YouTube channel owners, so just reach out to me and I can get you access. Another feature of this website is that you can find out who the top channels are. It's really useful as a channel owner to see who the leaders in this space are and say, I want my channel to be like these guys. There is the 100% club, which is growing. The last couple of times we've given this presentation, it was just the UW School of Public Health. They have been leaders in this space and have used this website and this tool to get to that level.
But with 182 videos, they have captioned 100% of those, which is really awesome. So I always want to give kudos to them, and still do, but now they've got some company. Granted, not 182 videos, but the Center for Neurotechnology, which I used on the previous slide as an example, had a couple of nos; they used this tool to figure out which videos they hadn't captioned, and they went back in and captioned those, so they are now at 100%. And Environmental Health & Safety, with only two videos, has captioned both of those, so they are now in the 100% club too. If you get access to the website, you'll also find out who's in the 90%, 80%, 70%, 60%, and 50% clubs. So we're only sort of rewarding those who have captioned at least half of their videos. Those that haven't, we're not in the business of shaming, but hopefully you'll get to the point where you're in one of these clubs, maybe even the 100% club. We wanna see that grow to the point where it won't even fit on the slide. So I wanna move on, then, to audio description. With captions, we've been talking about people who are unable to hear the audio content; captions provide access to that audio content. If somebody can't see the visual content, then maybe they can get an idea of what the video is about, maybe they can fully understand the video via its audio track just by listening to it, or maybe not; it depends on the content that's in that video. Somebody who is blind and watching a video may need what's called audio description, which is a separate narrative track that describes the visual content that isn't otherwise accessible via the audio. That's exactly what this slide says: it's a separate narration track that verbally describes key visual content that's not otherwise accessible. It goes by different names; sometimes you'll hear it referred to as descriptive video, or just plain description.
There's also another term that's important, and that is extended audio description. In order to describe what's happening visually, there needs to be a gap in the spoken audio so that you can squeeze that narration in. If there is no such gap, if somebody is talking constantly or there's constant informative audio and no place to inject description content, then the solution is to pause the video, insert narration, and then resume playback. That technique is called extended audio description, and it plays a pretty critical role in audio description, because there are in fact a lot of videos that just don't have enough room to insert additional narration. So what we wanna talk about in the time remaining is, first of all, how to prioritize your audio description efforts; then three different approaches to getting audio description; and also how to avoid the need for audio description altogether. First, regarding prioritization, it is essentially the same. I think this is the same slide, or a very similar slide, to what we had for captions: for either captions or audio description, or any sort of video accessibility issue, you need to consider the audience demographics. If this is a video that is about accessibility, or that features people with disabilities, or is in any way disability-focused, then the likelihood that the audience will include people with disabilities is pretty high, and that would be a strong case for making sure the video is accessible. But you can also prioritize by traffic and by publication date, as we described when looking at YTCA. If the video is on YouTube, use YTCA to help figure out the prioritization for your channel based on traffic and publication date. Also, with audio description, there is a unique question that doesn't really apply to captions, and that is: does the video need description? Because not all videos do.
If you watch the video with your eyes closed, the question is, do you understand it? Or are there important details that you're missing by not being able to see that content? It is a high priority if nothing makes sense with audio alone. And there are quite a few videos at the UW that fall into that category, as it turns out. It is a medium priority if the video is generally understandable, but there are some critical details that get lost. And it is a low priority if some of the information is lost, but it really isn't critical information; somebody can just listen to the video, and they do understand it and get all of the important content. So as you're prioritizing, it's important to consider those. Watch your videos, or if you already know your video's content, ask yourself: is this a high, medium, or low priority need for description? I think it's helpful to look at a few examples, and I've got these already pulled up. We'll start with Together We Will. This is a video from the UW. Let's just watch a little bit of this. And somebody give me a Zoom thumbs up if you can hear the audio, because it's possible I forgot to check that box, but hopefully I didn't. All right, thanks for the thumbs up. So in the interest of time, we won't watch all of that, but what do you think? Go ahead and type in chat: is that a high, medium, or low priority need for audio description? Actually, I'm going to continue to play it while y'all are typing. Oh, I see a lot of votes for high. Everybody agrees it is a high priority need. Obviously, if you're just listening to this, all you hear is a nice piano solo over some orchestration, but there is important content. There's text on the screen. There are important visuals that you don't have access to at all if you can't see this video. And so this needs audio description. Here's another one. This is the Best of UW 2016.
This is one that actually is produced every year, but I like to use the 2016 one as a highlight. Let's just watch a little bit of this. So that's fairly obvious too, I think. It's similar to the Together We Will video in that there's nothing here that is accessible. It's just music, but it really tells a story about all the wonderful things the UW has been doing over the past year. It's really important that it be described. And as it turns out, it was described, and there's a link here: "Video is also available with audio description." This is one way of delivering audio description: have a separate described video and link to it. The only issue I take with this is that the link follows the video. So somebody may have already watched the video and gotten frustrated by its lack of accessibility before they discover that there's a link. So it's better to put the link above the video rather than below it. Here is a video, one that we produced. It's obviously old; Michael K. Young was the president of the University of Washington at the time. But have a look at this and ask: is this high, medium, or low? And as you're watching, go ahead and type your answers in chat. "We are committed to the notion that everyone should have an opportunity to participate in higher education, whether it be from the learning perspective or the research perspective or an opportunity to work here at this institution. We benefit from that because we get to enjoy the talents and the skills of those people who come in and also their perspective, which in many cases will be different from the perspective of others on campus. So accessibility becomes a very important value at the university. We're a leading university globally. We want the best talent in the world for our students, our..." So what do y'all think so far? Is this a high priority, medium priority, or low priority? Okay, I've got a lot of votes for medium.
So it's not quite as inaccessible as the previous couple of videos that we've looked at. It's arguable, debatable perhaps, whether that is medium or high, in that if you're just listening to this video, you have no idea who said all those profound things that Michael Young said. There's a lot of good content in there, and it could just be anybody off the street. It has a lot more credibility if you know that this was Michael K. Young, president of the University of Washington. So just that little bit of information, identifying the speaker and their affiliation, is really a critical piece. And so it could be argued that this would be a high priority, but it also is more accessible than the two videos that we watched previously. Here's another example that we produced. This is a video called Teamwork: Making IT Accessible at the University of Washington and Statewide. What do you think of this one? High, medium, or low? "My name is Sheryl Burgstahler and I direct Accessible Technology Services at the University of Washington. And through our Access Technology Center and other services, we're making sure that the IT that we develop, procure, and use at the University of Washington is accessible to all of our faculty, students, staff, and visitors. Either ourselves for our websites, or with vendors if it's a commercial product. My name is Patrick Pow. I'm from University of Washington, Tacoma. My responsibility is technology. I'm the Vice Chancellor for Information Technology." So I'll just go ahead and answer that. In this case, this is not only a low priority, it is a no priority, because we considered the need for accessibility upfront and tried to avoid the need for audio description. If it's possible to do that as you're scripting the video, then that arguably is the best way to go about it. Then you've got one video, one version, that meets the needs of everybody.
The one thing in the previous video that needed to be described was the name and affiliation of the speaker. And in this video, everybody identifies themselves the first time they speak, so that issue is taken care of. I want to go back to the president's video, the reflection on the Best of the UW. That was audio described, and let's have a look at the difference. This is a version that was professionally audio described: "Words appear. Hashtag Best of UW 2016. The Nobel medal next to David J. Thouless, 2016 Nobel Prize in Physics. With President Obama, Mary-Claire King, National Medal of Science. UW and Microsoft break record for DNA data storage." So for the most part, the narrator is just reading the on-screen text, and that provides access to all the content. But there are some places where key visual information gets added as well, such as "with President Obama." There's no on-screen text that says President Obama is in this picture, but the audio description provider chose to add that because it's a key detail. So that brings us to the three approaches: how do you do audio description? One is to hire a traditional audio description service provider. The video that we just watched was described by a company called Audio Eyes; we do quite a bit of work with them when we choose to hire a traditional audio description service provider. The second method is to hire a captioning vendor, such as 3Play Media. They do a lot of our captioning work; we have a state contract with them. They also do audio description now, but they do it a little bit differently than a traditional audio description service provider would. A third way is to do it yourself using a timed text file. So let's quickly look at each of these methods a little deeper. First of all, a traditional audio description provider such as Audio Eyes will use professional voiceover talent to narrate, and it's going to be professionally mixed.
So they actually will duck the audio down a little while the description comes in, and then balance the various tracks so that you can hear the description but can also still hear background audio content that's important. So it tends to be a lot more seamless and just overall better quality. The typical deliverable from a company like this would be a separate audio described version of the video. You send them the video, they add audio description to it, they send you back a described version, and then wherever you show the video, you link to the described version, just as the president did on her blog in 2016. The typical price range is somewhere between $10 and $15 a minute, depending on the complexity and which vendor you choose. Also, if they need to pause, if they need to do extended description, that typically costs more. There's a process involved in doing that, particularly if they try to do it well. They may loop the audio so that there's seamless background audio while they're pausing, which adds to the professionalism, but that tends to cost a little bit more if they have to do it. We have a shrinking list of providers. I've mentioned Audio Eyes. Georgia Tech does this as well, through CIDI, their Center for, I can't remember what the acronym stands for, but they're another provider. And WGBH, the public television station out of Boston, actually invented this technology, so they do it as well. Those are the three providers currently listed on our description page. It used to be a longer list, but there have been mergers and consolidations and so forth. So check that out. And here's the example that we just looked at of using approach number one. Approach number two is to use a captioning vendor. I mentioned 3Play Media; Automatic Sync is another competitor of 3Play Media that has also entered this space. The cost if they do it is slightly less.
3Play Media, last time I checked, and I think this is still current, charges $7.50 a minute, as opposed to the $10 to $15 range. It is $11 a minute for extended description, and there are additional costs for expedited requests. But there's another factor, and that is that captioning is an integral part of their process. So they have to caption your video even if that's not what you're requesting. They caption it first, charge you for captioning, and then add the description on top of that. The output uses synthesized speech, which is another main difference from what a traditional audio description provider delivers. But again, the typical deliverable is a described version. They also have a lot of different options if you're ordering through their website; you've got choices of voices and many other features you can choose from. One nice thing about this is that it's a much more straightforward process. If you're working with a traditional vendor, there's a lot of back-and-forth communication, exchanging emails and talking about what needs to be done, whereas with this you just use their dashboard and get the job done. The third approach is a WebVTT file. Gaby showed this for captioning; it's the same file format, but instead of caption text, you have audio description text, and that gets read by the browser in a synthesized voice at the appropriate time in the video. The advantage of this is that it's super easy. Especially if you don't have a lot of description text, you can just open up Notepad and type in your description file. It also is the official way built into the HTML5 specification: there's a track tag with kind equals descriptions and a source that would be a VTT file. And the idea is that web pages with an embedded HTML5 video and the track tag would be able to handle audio description natively. The only problem is, browsers don't do that currently.
You have to have an alternative accessible media player; the players that are native to the browsers don't support this yet. Also, only one video is needed. You don't need a separate described version; the player handles the descriptions. And extended audio description doesn't require a separate video either. The player can automatically pause the video when description is happening and then automatically resume playback when the description is finished. The big disadvantage is that this only works in Able Player, which is the player that we created. So here's a link, and there's another link at the very end, that takes you to Able Player. It's an open source media player that fully supports description. Also, audio description is an art. You should only do this if the needs are simple. People who do audio description for a living have trained for many years on what to describe and what words to choose, and there are some subtleties there that are very important. It really is best left to the pros, unless you just have something really quick and dirty, where a particular visual needs to be described and you can do that quickly on your own. Also, Panopto does support this, as Gaby mentioned, in the same way that captions are supported: where you have a caption editor in Panopto, you also have an audio description editor. So if you've got a lecture, and maybe the lecturer demonstrates something but doesn't describe what they're demonstrating, or there's some other key visual information, it's really easy to just, at that moment, type in a description of what's happening. And voila, you've got audio description that automatically gets played to the user who turns on description in their media player within Panopto. So check that out: wherever it says captions and you can edit your captions, there's also an audio description option.
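To make the do-it-yourself approach a little more concrete, here's a rough sketch of what the markup and the description file might look like. The file names and cue text below are made-up examples, not from any actual UW video; the part that comes from the HTML5 specification is the track element with kind equals descriptions, which a player like Able Player reads aloud at each cue's start time.

```html
<!-- Embedding a video with a description track.
     File names (lecture.mp4, lecture-desc.vtt) are hypothetical. -->
<video controls>
  <source src="lecture.mp4" type="video/mp4">
  <!-- kind="descriptions" marks this timed text track as audio description -->
  <track kind="descriptions" src="lecture-desc.vtt" srclang="en">
</video>
```

And the description file itself is just a plain-text WebVTT file, the same format as captions, with each cue holding the narration to be spoken at that time:

```
WEBVTT

00:00:12.000 --> 00:00:15.000
Michael K. Young, president of the University of Washington.

00:01:03.000 --> 00:01:07.000
Text on screen: Best of UW 2016.
```

As noted above, browsers will load a descriptions track but won't voice it on their own, which is why an alternative player such as Able Player is needed for this to actually reach the user.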
So I have a slide here with links to more information, and we will be sending out the slides afterwards and making those available. So do check out these resources. And I think I am at time, or over time, but if anybody does have a burning question for either Gaby or me, we're happy to answer it. We don't have any additional questions in the chat, so if you want to unmute and ask a question, you can do that as well. The recording will be available. I think, Annemarie, you send that out to everybody who registered, correct? Or you can look at the link that is posted in chat. All right, well, thanks everybody for coming today. Hopefully you learned some stuff. I'm just going to keep this on the opening slide, but there's my email. If you do want access to the YTCA website, so if you're a YouTube channel owner, reach out. I should also mention that we have a program right now where we are offering free captions for the top five videos on your YouTube channel, defined however you choose to define top five. So if you haven't already taken advantage of that, reach out and we can set that up as well. All right, well, enjoy the rest of your day and enjoy the rest of the season. It starts today, but have a good fall.