All right, thank you, Anna Marie. I'm gonna share my screen here. So just a moment while I get set up. Okay, great. So I just wanna confirm that you can see my slide there and it says video accessibility. Does everybody see that? Okay, great. Well, thanks for coming today, everybody. My name is Davy DeYoung and I am a member of the IT accessibility team, and today we are going to be presenting video accessibility to you. Along with me is Terrell Thompson, who's the manager of the IT accessibility team. We will be presenting on topics related to video accessibility. So let's go ahead and get started. Okay, who is impacted by inaccessible video? When we think about accessible video, we should be thinking about who will be impacted by inaccessible video. So let's see. In certain situations there may be users who are unable to hear the audio. Maybe they're deaf or hard of hearing, or maybe they're in a noisy situation and they're just not able to hear the audio. The solution for that is to provide captions, and I'll be giving you some information on enabling captions in the different video platforms that are available to the UW, and also how to edit captions once your recording is complete. Another thing to ask yourself is whether there are users who are unable to see the video, which may include users who are blind or have visual impairments, or maybe somebody is just listening to the video instead of actually watching it. The solution for these barriers would be audio description, and Terrell's gonna talk more about audio description and offer some solutions later on in the presentation. And then there may be users who are unable to both hear and see the video. The solution for this group would be to provide a transcript.
And transcripts are useful because they allow folks to jump to a certain place within the video using keywords, and they may also be consumed by individuals using a screen reader or a braille device. There are other examples of individuals who may be impacted by inaccessible video, and those include folks who may not use a mouse. They use a keyboard only to navigate. Maybe they're not able to hold a mouse in their hand, or have some weakness in their hand, so they're only able to use a keyboard. Or that may include folks who are using a screen reader to access information, and they're not able to use a screen reader to access the controls of the video player. Or folks are using speech input such as Dragon NaturallySpeaking. They use speech input to navigate around their computer. They talk to their computer in order to perform certain tasks or to dictate text into the computer. Or you may have folks who are dependent upon high contrast or some custom color schemes. And the solution to that would be to provide an accessible media player. Terrell, again, is going to talk about an accessible media player that offers playback in multiple languages. It also offers audio description and ASL interpretation. This accessible media player was developed at UW, and Terrell's going to cover more about that later. And just for clarification, I wanted to include information about the different accessibility offices at the university and what kind of responsibility they have when it comes to accessibility of videos. So if you have a student who has requested an accommodation, Disability Resources for Students, or DRS, will provide the funding and support for captioning and audio description of course materials for that student. And the Disability Services Office, or DSO, provides similar services for faculty, staff, and also visitors to the university. Now DSO also coordinates ASL interpretation and CART captioning for in-person and virtual events.
So any requests for those kinds of services will need to go directly through DSO. And then there's Accessible Technology Services, or ATS. We provide internal grant funding for captioning high-impact videos in a more proactive manner, and we also provide training and support for UW departments with regards to accessible IT. I'll talk about the grant-funded captioning service more in depth later on in this presentation. OK, so first, I'm going to cover captioning. And I'm going to cover captioning in Zoom, Panopto, and YouTube, since those are the three main video services that we use here at the UW. And I'm going to talk about how to enable automatic speech recognition. But it's important to note that automatic speech recognition captions, or ASR captions, may not be accurate enough to serve as an accommodation for people who depend on captions. Even though the accuracy is pretty good, ASR captions lack the ability to convey the context of what's happening in the meeting, and sometimes they mislabel speakers. And then technical, medical, sometimes legal, and other specialized terms are often not transcribed accurately. Now with that said, Zoom's ASR uses Otter.ai as its speech-to-text engine, and it's pretty darn good. In some situations, I've gotten feedback that individuals actually prefer ASR over human captioners. So our recommendation is, if you have an accommodation request to caption an event or a meeting, it may be appropriate to reach out to the individuals who have requested captions as an accommodation and ask if automatic captioning within the Zoom platform is acceptable to them. If they say yes, then you can follow the steps for enabling the automatic captions and you should be fine. But if not, if they do want to have a CART captioner, then you should make arrangements through DSO to hire a human captioner. Okay, so let's get started. I'm gonna first talk about enabling captions in Zoom.
So by default, UW Zoom accounts have the ability to turn on automatically generated captions for your meeting or webinar. You can check these settings or make changes when you're logged into your Zoom account through a web browser. To turn on live captions in a Zoom meeting, look at the Zoom toolbar, and the live transcript button will be visible to you as the meeting host. Now, if you click on that icon, then a little pop-up window appears, as shown on this slide. The slide is a screenshot of my Zoom instance where I've clicked on the little CC live transcript button. Once you've clicked that, you can select the enable button under live transcription, and also check the checkbox next to allow participants to request live transcription if you want to include that as well. Incidentally, these are the exact same steps that you would take if somebody has requested an accommodation and you need to assign access to a human captioner. If you have a third-party human captioner, usually somebody you may have requested through DRS or DSO to help caption the session, you'll select the button there that says copy API token. That copies the token to the host's clipboard, and then you can paste it into the third-party closed captioning tool, which will allow the CART captioning to appear within your Zoom instance. And we have captions enabled for this webinar, and you're welcome to turn them on by clicking on the CC icon in the Zoom toolbar. You can also view the transcript at the same time; the transcript pops out on the right-hand side and also helps identify who is speaking. Okay, so I'm going to switch to my web browser for the next few items and provide a demonstration of editing captions in your Zoom cloud recording.
So once your meeting or webinar has ended, it's gonna take some time to save the recording to the cloud, and when that process is complete, you're going to receive an email from Zoom with a link to that recording. You can access the recording just by clicking on the link within the email, which will prompt you to log into your Zoom account in a web browser and take you directly to the cloud recording, or you can access it by going to the recordings tab within your Zoom instance. And that's what I have up here on my screen. So a couple of different things I want to point out here. This link down at the bottom where it says audio transcript is a live link. I mentioned that it takes a long time for video to save to the cloud. It takes even longer for the transcripts to save to the cloud. So when you get your first email from Zoom saying that your recording is complete, that's probably just gonna be the video recording and not the transcript. You need to wait a little bit longer until you get a second email from Zoom that states that the audio transcript is complete. If you click too early, then this link will not be live and you won't actually have any captions to edit. But once that's complete, then you can go ahead and click on your recording, and I'm gonna open it up in a new tab here. And it automatically starts playing, which is always distracting to me. But this is the caption editor within the Zoom instance. You'll notice that the video takes the prominent position here in the center of the window, and then the transcript appears on the right-hand side. Usually what I do is click on this little CC button here. It says show subtitles. And then you can see the captions actually appearing here. Now, I already started to work on this particular video. And it's pretty easy to edit the text of the transcript just by hovering your mouse over the word balloons.
And you'll notice when I do that, this little pencil icon appears, and when I hover my mouse over that, it says edit. I'm gonna make this a little bigger so you guys can see that. Okay, great. So when I click on that, that allows me to make an edit within this word balloon. So I'm gonna go ahead and make changes here. Okay, so I've made changes to my transcript and I have two different options. I can either select the checkbox, and that will save my changes, or I can click the X, and that will reject the changes and go back to the original text. But when I click on this checkbox here, I want you guys to pay attention to something that will happen. Up in the upper middle of the screen there'll be a notification that says that the transcript has been updated. So I'm gonna go ahead and click on that so you can see it. So I saved that, and you can see here: transcript has been updated. Now, when you play the edited transcript back in your instance of Zoom, those updates might not appear instantaneously, but if you share the cloud link of the recording, that will have the updated captions with any changes that you have made. It does take a little while, maybe a day or two, for your instance to update, so that the captions on your instance match the transcript that you have edited. It won't happen in real time in your instance, but the Zoom recording will reflect those changes once you have made them. Okay, so that's pretty much it for Zoom. I'm gonna switch platforms and talk a little bit about Panopto. This is another video recording option that we have at the University of Washington. And you can enable automatic captioning within your Panopto videos pretty easily. You'll notice that if I hover my mouse over the video, I get these little buttons up here.
I can select the settings button, and that opens this up in a different kind of preview. And I have another left-hand navigation menu here. If I select captions, then I have this request captions option. It's a drop-down option. And if I select the drop-down, you'll see the very first option there is automatic machine captions, and that's what you will be selecting. Now, I have a bunch of different options down here. I do a lot of sending videos to be captioned by other third-party caption services, so you won't see this huge list of different services. Essentially, these are budget numbers that will pay for the captioning, and that captioning is done by human captioners. So the humans will actually go in and caption the videos for you, but that's a for-fee process. The automatic machine captions will be free, and you can select that. And then, again, it takes a little while for those captions to appear in your Panopto instance. But when they do, you can go back to your video and make any necessary changes within the caption editor in Panopto. So I'm gonna go ahead and open up this video instance here, which also automatically starts playing, which is very distracting. So here we have the Panopto caption editor. Actually, it's a video editor where I can also work on captions as well. I've got the main video in the center of my screen here, and then I have my slides down at the bottom. On this left-hand navigation, I'm gonna go ahead and select captions. And when I do that, you can actually see the transcript with timestamps, but I'm not really able to make any changes yet. In order to make changes, I need to select this edit tool, this little pencil tool up in the upper right-hand corner. When I select that, my view is gonna change here, and I'm gonna select captions again. And this time I can go ahead and click into these little word bubbles and make my changes. Okay, great.
So once you have made all your changes to your caption file, you can click apply, and that will save all the changes to the cloud. And then your caption file will be updated. An interesting thing that I noticed: if you do not hit the apply button, but you have made changes to your transcript and you exit out of the Panopto caption editor, it will still save whatever changes you have made in your transcript as a draft, but it doesn't publish them to the cloud yet. Which is kind of nice. So if you have a long video and you need to come back to it, but you don't want to apply the changes to the entire caption file, you can make the changes, exit out of your browser, and come back. Your changes will still be there. Once everything is done, you can go ahead and hit apply, and that will make the changes to the cloud recording of your caption file. And then everything will be updated from there. Okay, so let's go ahead and move on to YouTube. In YouTube, when you upload your videos to YouTube Studio, captions are automatically generated using ASR. You don't have to do anything; that just automatically happens once you upload your videos to the YouTube platform. Now, just like in Zoom, it takes a while for the video itself to upload, and then it takes another while for the automatic captions to be generated as well. So I'm gonna go ahead and take a look here at a video file that I have in my instance. When your captions are ready and you're wanting to edit them, you'll notice this live link here that says duplicate and edit. When you first upload your file, this will actually say ADD. And if you click on that, it will give you the option to either upload a transcript that has already been created or start typing the captions for that video yourself. At that point it hasn't actually generated the automatic captions yet; it will just give you the opportunity to include them on your own or start manually typing them.
But this was uploaded on September 29th, 2021. So it's been here a while, and my automatic captions should already be there. So I'm gonna go ahead and select this duplicate and edit. And this is the caption editor within YouTube Studio. You'll notice that the video appears here on the right-hand side, and then we have the caption transcript that appears on the left-hand side. Now I wanna select this link here that says edit timings. When I select that, you can see that the transcript turns into word bubbles with timestamps of when these words will appear in the video player. Something else that you may notice, I can make this bigger. Oops, yeah, that's not so great. Okay. Something else that you may notice is that there are word blocks that appear in the timeline, and right below the word blocks is a waveform of the audio of the text that is being spoken. So if you need to make any adjustments to the timestamps of when the captions appear on screen, you can really easily do that just by sliding these text blocks around. And it's also super easy to make any changes to the text in the caption editor for YouTube. You also have the option, if you're working on longer videos, to save drafts. And then once you have completed editing your entire transcript, you want to select the publish button, and that will send your updated captions to the cloud version of your YouTube video. Okay, so that's pretty much it for the caption editors. But sometimes automated captions just aren't that great. Maybe a lot of technical terms were used during the presentation and they weren't transcribed accurately. Or maybe there were several speakers in a webinar or a meeting and the automatic captions aren't identifying who's speaking at the right time. It could be other factors, maybe a noisy environment, all kinds of factors.
So I wanted to include this slide as a friendly reminder about the UW IT captioning service. Accessible Technology Services manages this project and will caption UW video presentations without any charge. But there is an application, and applications are reviewed by ATS staff. And the videos that we caption do have criteria: they need to be highly visible, forward facing, high impact, usually multiple use, and maybe strategic videos as well. We have quite a bit of funding for this, especially for captioning Panopto videos. So I highly encourage folks to apply for this service, and I've included a link for the service here on the slide as well. For other videos, maybe outside of Panopto, Zoom, or YouTube, although those could be considered as well, you might want to consider using the state contract with 3Play Media. If you don't want to use automatic captions and you don't have the time to edit those captions yourself, you can pay 3Play Media, and it's $1.95 a minute under our contract with them. Now, this slide is just a screenshot from a third-party captioning service, and it lists a bunch of different caption file types. These are all standard file types used by popular media players. And of course, there are a bunch of different ones: Facebook, SMI, SRT, and so on. SRT seems to be a very popular file type. But the purpose of this slide is for you to know that it's important to know what file format your video player supports, as that really determines the file type that you're gonna choose. And then this slide just shows the format of a caption file. In this case, this is a WebVTT caption file, and it's just simple text with timestamps and colons and whatnot. These caption files can easily be edited in something such as Notepad++. You don't really need a caption editor to edit these files, but it sure does make things a lot simpler.
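For reference, a minimal WebVTT caption file looks something like this. The cue text and timings here are invented for illustration, not taken from the slide:

```text
WEBVTT

1
00:00:01.000 --> 00:00:04.000
Welcome, everybody, to today's webinar
on video accessibility.

2
00:00:04.500 --> 00:00:08.250
My name is Davy and I'm a member
of the IT accessibility team.
```

Each cue is just a start and end timestamp separated by an arrow, followed by the caption text, which is why a plain text editor is enough to fix a typo.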
So that's something to keep in mind: if you're using a text editor to edit your caption files, it's really easy to make a mistake which could throw off the timing of your captions for the rest of the video. Just something to be aware of. And which videos are the highest priority when it comes to deciding which videos you should caption? Well, certainly videos that are required viewing for individuals who need an accommodation would be a high priority for captioning, but videos that are likely to be required viewing for individuals who need an accommodation should also be considered. So think about what their needs may be. Other videos to consider include ones that are popular and viewed a lot, videos that are relatively new, where captioning could be part of your workflow, and videos that provide critical content. So how do you prioritize your videos for captioning? Well, I'm gonna hand it over to Terrell at this point, and he is going to talk to you more about that. Thanks, Davy. And before we do switch speakers here, I just wonder if anybody has any questions about captioning, because I'm gonna talk a little bit about a tool that we have to help prioritize captioning efforts, but mostly I'm gonna shift gears and start talking about audio description. So now would be the time if you have any caption-related questions. There was one in chat, by the way, asking whether, if automated captioning is enabled within Zoom by a host, all participants see captions right away or whether they need to click the CC button. I tend to always be host or co-host in meetings where captions are provided, so I don't know if I have the answer to that, but there is some discussion in the chat about it. I think that the user has to click the CC button to see the captions, and that there is a notice that gets sent out to everybody saying live transcript captions are available. And it sounds like Andrea is confirming that that is the case.
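Since a stray edit in a text editor can throw off your timings, a short script can sanity-check the cue timings in a WebVTT file after hand edits. This is just an illustrative sketch; the check_vtt helper below is my own, not part of any captioning tool:

```python
import re

# Matches a WebVTT cue timing line, e.g. "00:00:01.000 --> 00:00:04.000".
CUE_TIMING = re.compile(
    r"(\d{2}:\d{2}:\d{2}\.\d{3}) --> (\d{2}:\d{2}:\d{2}\.\d{3})"
)

def to_seconds(ts):
    """Convert an HH:MM:SS.mmm timestamp to seconds."""
    h, m, s = ts.split(":")
    return int(h) * 3600 + int(m) * 60 + float(s)

def check_vtt(text):
    """Return (line number, description) pairs for suspicious cue timings."""
    problems = []
    last_start = -1.0
    for lineno, line in enumerate(text.splitlines(), start=1):
        if "-->" not in line:
            continue  # not a timing line; skip header and caption text
        match = CUE_TIMING.fullmatch(line.strip())
        if not match:
            problems.append((lineno, "malformed timing line"))
            continue
        start, end = map(to_seconds, match.groups())
        if end <= start:
            problems.append((lineno, "cue ends before it starts"))
        if start < last_start:
            problems.append((lineno, "cue starts before the previous cue"))
        last_start = start
    return problems
```

Running this over a hand-edited file before re-uploading it would flag the exact line where a typo broke the timing, instead of discovering it mid-playback.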
Any other questions about captions before we move on to other stuff? Feel free to either raise your hand or type in chat, or, since we're a pretty small group, feel free to just unmute and talk if you prefer. If not, then I will go ahead and share my screen. I actually have a slide that shows this tool, YouTube Caption Auditor. YTCA is a tool that we developed. It's free, it's open source, and this is the report, or set of reports, that it generates. We actually have a site hosted on our department's server that you all have access to. If you are a YouTube channel owner and you want to use this tool to see how YouTube channels at the UW are performing in terms of their captioning efforts, but primarily to prioritize your own captioning efforts, then this is a really great tool for that. It's protected by UW NetID, so basically you just have to ask me and I can give you access to it. I encourage anybody who has some ownership responsibilities for a YouTube channel to use this tool and take advantage of it. As it says here, we have 88 YouTube channels. These are known channels; I suspect there are many more out there, and some of these seem to be dormant if they haven't been updated in a while. But one thing we can do is sort this table. It shows us all of those channels, how many videos they have, when their latest video was uploaded, how many of the videos are captioned and what percentage that is, as well as a few other things, which are customizable. You can select what fields are shown in this table. But I'm particularly interested in how folks are doing on their captioning efforts. And this is color-coded, so you can see the rows that are red have zero captions. Those are channels that need to kind of get their act together. They haven't started their captioning yet. Whereas green rows indicate a channel that has captioned 50% or more.
And so if we focus on the positive and sort this in descending order, then we see that there are quite a few green rows. These are channels, again, that have captioned 50% or more. Several are over 80%, three are over 90%, and one is at 100%. UW School of Public Health has done a great job. With 176 videos, they've captioned all of those, and we can click on any channel to see what's been happening with it. We actually see over time kind of how this has progressed. They attained 100% in 2019 and maintained that in 2020, but then they uploaded a few videos that were not initially captioned. Recently they caught up and have captioned those videos as well. This is also an accessible chart, so you can check out what accessible data visualizations look like. It has a sonified graph button so you can listen to this graph, which is pretty cool. This is using Highcharts. So that's off topic, but this is a really cool app for taking a look at accessible data visualization. Further down on the page, you see a list of all 176 videos, and this too can be sorted however you like. That's not gonna be very helpful in this case, because everything's captioned. But let's go back and look for a slightly less perfect example. I'm gonna pick on the iSchool, for example. They're at 48%, so just shy of 50%. We can see that they've never attained 100%. They got a little bit closer in 2021, but then they fell a little bit farther behind. But the way that this is really helpful for prioritization is that you can sort, and I think probably the most logical way would be to sort by views. Then you get the most popular videos on your channel emerging to the top. And as we see here, what initially looked pretty bad (they had a lot of nos in the captioned column) is actually not so bad; they have done a pretty good job. If we take the highest priority to be those videos that are viewed most frequently, then they have captioned most of their high-traffic videos.
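The sort-by-views prioritization idea can be sketched in a few lines: surface the uncaptioned videos with the most views first. The video data and field names below are made up for illustration; YTCA's actual fields and report format differ:

```python
# Hypothetical per-video metadata, as a channel owner might export it.
videos = [
    {"title": "Degree overview",  "views": 24000, "captioned": False},
    {"title": "Campus tour",      "views": 18500, "captioned": True},
    {"title": "Faculty panel",    "views": 3200,  "captioned": False},
    {"title": "Orientation 2021", "views": 150,   "captioned": False},
]

def caption_priorities(videos):
    """Return uncaptioned videos, most-viewed first."""
    todo = [v for v in videos if not v["captioned"]]
    return sorted(todo, key=lambda v: v["views"], reverse=True)

for v in caption_priorities(videos):
    print(f'{v["views"]:>6}  {v["title"]}')
```

The same two-step filter-and-sort works for any field you care about, such as upload date instead of views.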
The number one exception, with 24,000 views, is the Bachelor of Science in Informatics program overview, which sounds like a pretty important video. This tool allows that to surface, and they can say, oh, that really needs to be captioned. It's a high-priority video that slipped through the cracks. You can also sort by date, so you get the most recent videos emerging to the top. In that case, their most recent video, although it hasn't been viewed a whole lot, is only a four-minute video, but it has not been captioned. So this tool helps just by showing all the videos in a context where you can easily look through and see what's captioned and what's not, sorted a few different ways; YouTube doesn't really present them anywhere in a way that's this easy. And then you can take it from there. This also can be a really useful tool for prioritizing your audio description, and that's what I wanna turn my attention to now. I'm actually gonna reshare, because I forgot to check that little share sound button, and I've got some sound to share. So what is audio description? Davy talked a little bit about this, but essentially, if you think about captions being a solution for people who are unable to hear the video's audio, audio description is a solution for people who are unable to see the video's visual content. They may be able to listen to the audio, and in some cases they get most or even all of the content when they do that. But if there's any content that is communicated visually alone, and listening to the video is not sufficient to get that information, then that's a barrier, and it needs to be remedied somehow. Audio description provides a solution to that. And this is a post-production solution: you've already created a video, and you didn't add in the narration that would have made it accessible. Or if it's a lecture, the lecturer did not describe things that were happening visually.
Maybe they didn't verbalize the content that's being shown on slides, or gave a demonstration and didn't describe what they were doing. Those are all best practices, but if in the end you have a video where that didn't happen, and so there's content that's inaccessible, then it needs to be added in after the fact. Audio description is a way to do that. It is known by other names, sometimes descriptive video or just description by itself, and various other terms are used, but audio description seems to be the term that has emerged as the most common. There's also a term you should know about: extended audio description. That's for when audio description doesn't fit. If you've got audio content constantly, somebody's always talking or there's always dialogue, and you're gonna narrate something or describe something that's happening visually, there's no place to squeeze in that narration. Extended audio description means that the video pauses at that moment while a description happens, and then it resumes playback after the description is over. So that's kind of a common technique; within audio description, extended audio description sometimes is necessary. So about audio description: we wanna talk about how to prioritize, which is basically similar to prioritizing your captioning efforts but with some differences; how to describe, where we're gonna talk about three different approaches; and avoiding the need for audio description altogether. So first of all, prioritization. The same sorts of strategies for captioning apply to audio description. Look at your audience demographics. Who are you expecting to use this? If this is a video where you expect there to be people in the audience who are without sight, then audio description is gonna be a priority; without hearing, then captioning is gonna be a priority.
So that sort of thing. But also look at traffic, and look at publication date. Ideally, everything we produce now, today, should be accessible out of the box, and the prioritization comes in when we're looking at our legacy stuff and wanting to go back and make all of that accessible. Then we'd have to sort of prioritize and gradually do that. And if videos are on YouTube, then use YTCA. And again, just reach out to me if you'd like access to that. The other thing that is unique about audio description, as opposed to captioning generally, is that not all videos need description. It really depends on the nature of the content. If you watch the video with your eyes closed, the question is, can you access everything? Do you get all of the important ideas, or is there information that you miss? It's a high priority if nothing makes sense with audio alone. It's a medium priority if the video is generally understandable, but there are some critical details that are lost. And it's a low priority if some information is lost, but it really isn't critical information, and somebody gets the general idea of the video just by listening to it. So I wanted to share a few videos just to kind of give you a sense of this, and as you think about those priorities, high, medium, and low, ask: what priority is this? We'll start with a UW video, Together We Will. And I think you're probably already getting the idea with this one, but I encourage you to actually close your eyes as you're watching this video and just see what you come away with. So, pretty powerful video, right? I mean, the music itself is powerful, but if you can't read that on-screen text, then you're gonna come away with probably a very different impression of what this video is all about than if you can read that on-screen text or see the visuals. Here's another example. This is the Best of UW 2016 video.
There have actually been audio descriptions added in more recent years too, but this is an example I've been using since 2016 and I like it as an example, so I'm gonna continue using it. It's kind of similar to the last one. Let's watch a little bit. So obviously both of these are high priority, right? You don't get any content at all; you get a nice musical score. Both of these videos really move me and make me proud to be a Husky, but there's no reason for somebody to be proud to be a Husky watching this video if they can't see the content. So this is actually a good example of how to deliver audio description. This is on the president's blog back in 2016. This is that end-of-year video that I think gets produced every year, kind of highlighting all the great things that we accomplished in the past year. And in this case, right next to the embedded YouTube player in the president's blog, it says the video is also available with audio description. You can then click the audio description link, and that pulls up the described version on YouTube. So let's see what this is like with audio description. Words appear: hashtag Best of UW 2016. The Nobel medal next to David J. Thouless, 2016 Nobel Prize in Physics. With President Obama, Mary-Claire King, National Medal of Science. UW and Microsoft break record for DNA data storage. A collage of photos, inaugural Husky 100. So obviously that's a much more accessible video now, and anyone can be proud to be a Husky, whether they can see what's going on visually or not. So let's move on to another example as we're thinking through priorities. This is a video that we produced called IT Accessibility: What Campus Leaders Have to Say. Let's check out a little bit of this.
We are committed to the notion that everyone should have an opportunity to participate in higher education, whether it be from the learning perspective or the research perspective or an opportunity to work here at this institution. We benefit from that because we get to enjoy the talents and the skills of those people who come in, and also their perspective, which in many cases will be different from the perspective of others on campus. So accessibility becomes a very important value of the university. So what do you think? Is that a high priority, medium priority, low priority? And in the interest of time, I won't ask you to answer that, but think about it in your own mind. I would classify that as a medium priority because it really is just talking heads. Everything that's said, you get audibly. But the key missing piece here is not what is said, but who's saying it. That was Michael K. Young. Obviously this is an old video; he's not the president of the University of Washington anymore. But this video features a bunch of university presidents and CIOs and other IT leaders, all talking about the importance of accessibility, and none of them are introduced within the audio track. They don't introduce themselves, and there's no narration that says who they are, so it could just be anybody off the street talking. And obviously you want to know who they are and what their affiliation is so that they have credibility. So this needs to be described in order to be accessible, but it's a lesser priority than the previous examples we looked at, which were entirely inaccessible without description. Here's another example, another video that we produced. Let's check out a little bit of this. My name is Sheryl Burgstahler and I direct Accessible Technology Services at the University of Washington.
And through our Access Technology Center and other services, we're making sure that the IT that we develop, procure and use at the University of Washington is accessible to all of our faculty, students, staff and visitors. So I'll stop there; we could go on. But every person who speaks in this video, again, it's a talking-heads, kind of documentary-style video, introduces themselves and states their affiliation. So access is built in. In this case, it's a low priority, or actually a zero priority, need for audio description, because everything is communicated through the audio track. And this is really the best-case scenario: upfront, in pre-production, as you're designing and scripting the video, you think about integrating that in so you don't have to do audio description afterwards. But if in the end you do have to do audio description, then there are various ways to do that, and I wanna talk about three of those. One is to hire a traditional audio description service provider. That's what happened with the Best of UW video that we looked at; that was sent out to an audio description house. And we'll talk a little bit more about what that means and what services they provide. Second, you could hire a captioning vendor. The result is actually the same as hiring a traditional audio description service provider, but these are companies that have traditionally been in the captioning space, like 3Play Media and Automatic Sync. They are now doing audio description in addition to captioning. They do it a little bit differently, but that can be an option as well. And the third option is to do it yourself using a timed text file. So the first approach: hire a traditional description provider. A few years ago, so this is a little bit dated, we started with a directory of audio description service providers that is published by the American Council of the Blind. And they have about a hundred service providers listed.
And many of those are focused on describing live productions, theater events and that sort of thing. And others, at least at the time that we did this, were very local in the services that they provided; they didn't work on a national or global scope, and didn't work a lot, or at all, with post-production video, making video accessible with audio description. So we narrowed the scope down to about a dozen or maybe a couple dozen providers, and then we sent surveys out to all of them to get a sense of how much they charge, what their turnaround time is, and various other things, and tried to get a sense of who would be a good fit for us in higher education, knowing the sort of videos we have and what our needs are like. And from all of that, we narrowed the list down to seven providers. And actually, as of today, that's six, because one of those bought out the other, I just learned: 3Play Media has now acquired Captionmax. So Captionmax is no longer on the table. But anyway, on our making video accessible page, there's a list of providers. So it's a short list. What happens when you send to a traditional audio description provider is they will script the audio description, they then do the narration with professional voiceover talent, and they professionally mix it in. So they lower the program volume as the narrator is speaking, raise it afterward and so forth, and do that kind of seamlessly, so it all sounds good. And then the typical deliverable from them is an audio described version of the video. So then you've got one version with description, one version without, and you cross-reference the two, which is what happened on the president's blog. There was that link to the audio described version. The typical price range for that service is $10 to $15 a minute. That does depend on complexity, and extended description, where the video has to be paused, does generally cost a little bit more.
So here again is just a screenshot of that Best of UW 2016 video with the link to the audio described version. So that's how that would be delivered if you did it that way. The second approach is to hire a captioning vendor. Again, 3Play Media does this, Automatic Sync does this. We have the state contract with 3Play Media for captioning, and so basically for them, it's an add-on service. So you get a video captioned, and then you additionally check the box that says, I also want audio description with this. It's $7.50 per minute as their standard rate and $11 a minute for extended. So a little bit cheaper for the description itself, although you have to get the video captioned by them; that's a requirement. Even if you've already had it captioned, you still have to get it captioned through them. So the lower cost is a little bit nebulous, because you do have that upfront cost for captioning as well. The reason that they have that requirement, by the way, is that they're using a semi-automated process for figuring out where the description will fit, and they use the data that they produce in the captioning process in order to inform that. And so I've lobbied to get the two services separated, but they explained to me that that's why they currently are connected and necessarily connected. The output, and this is a unique thing too, uses synthesized speech. So it's not human voiceover talent; it is a speech synthesizer. And research shows that for a dramatic production, consumers prefer a human narrator, but that preference kind of goes away when we're talking about academic content. They just want description of any sort in order to access their academic content. The typical deliverable, again, is an audio described version of the video. So you get two versions, one with description, one without, but there are lots of other choices too, about a dozen different choices of ways that you can get this description.
Here's an example of the 3Play Media dashboard where we've uploaded a file to be captioned, and then these are optional services that we can add. So we check the box that says audio description, and then we choose it as either a standard description or an extended description. Maybe we don't know, and so we pick the choose-for-me box and let them decide, in which case we have to have a flexible spending plan; it may cost us $7.50 per minute or it may cost us $11, depending on what they choose. And then we've also got higher prices for expedited or rushed jobs. And there are lots of options actually from 3Play on audio description. You can choose different speaker voices, since this is all synthesized. You can choose the English speaking rate, slow, medium, or fast, and in making those choices, there are samples that you can listen to to figure out what the best fit is. And lots of choices for output. The third option is to use a WebVTT file. This is what Gaby described for captioning; it's actually the exact same kind of file, but instead of caption text, the file includes description text. So basically you've got a start time and an end time with some specific syntax; you have to write it out in a precise way. And then the content in between that start time and end time is the text that you want to be verbalized at that moment. And so if we take our IT accessibility campus leaders video, I've actually updated this, so it's got Ana Mari Cauce listed as the president instead of Michael Young. But it's the name of the person and their affiliation for each time the description needs to happen. So this is the kind of thing that's really easy to do. You can do it in any text editor. Open up Notepad, type in your description text, make sure that you've got the WebVTT syntax right, and grab the start time and end time just from your media player. Really easy to do this on your own. Doesn't cost a thing other than a little bit of time.
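To make that concrete, a minimal descriptions file might look something like the sketch below. The timestamps, names, and wording here are purely illustrative, not taken from the actual video; the important parts are the WEBVTT header on the first line, the blank line between cues, and the `start --> end` timestamp syntax:

```
WEBVTT

00:00:04.000 --> 00:00:08.000
Ana Mari Cauce, President, University of Washington

00:01:30.000 --> 00:01:34.000
A university CIO, seated in an office

00:02:45.000 --> 00:02:49.000
A campus quad, students walking between buildings
```

You would save this as a plain-text file with a .vtt extension, for example descriptions.vtt, alongside the matching captions file.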
One minute, two minutes, and you've got your audio description. And it is super easy. This is actually built into the HTML5 specification: with the track tag, with kind equals descriptions, pointing to the VTT file, HTML has a built-in way to deliver descriptions. You only need one video then, and extended audio description can be automatic; it can be built into the playback so that it just automatically pauses, rather than having to create an entirely separate video that does that pausing. The problem is it's not supported at all; none of the browsers' built-in media players support it. So the only place you have support for this is Able Player, which is a media player that we developed. It is free and it's open source and it's out there. There's a WordPress plugin and there's a Drupal module, although both of those are in early stages of development. So it's kind of a work in progress, but it does make it possible to deliver description using this method and to write your own description. The one caveat, though, is that if it's more than a low or maybe a medium priority description job, where you really need to describe things more eloquently, like in those first two examples we saw, that's something that probably should be sent out to the experts, because audio description is an art. People spend a lot of years learning to do this, learning what words to say to describe something, and that's not something that people who have not been sufficiently trained should really be dabbling with. It's more for when you've just got this one piece of content that is visual, with no audio alternative, and you need to describe just that. You can do it really quickly with this approach.
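In markup, the delivery method described above looks roughly like this sketch. The file names are hypothetical placeholders; and keep in mind that, as noted, browsers' built-in players ignore kind="descriptions", so a player like Able Player has to read the track and voice it:

```html
<!-- One video with captions and audio descriptions as separate
     timed text tracks. File names here are hypothetical. -->
<video controls>
  <source src="campus-leaders.mp4" type="video/mp4">
  <track kind="captions" src="captions.vtt" srclang="en" label="English">
  <track kind="descriptions" src="descriptions.vtt" srclang="en">
</video>
```

The appeal of this approach is exactly what the talk describes: one video file serves everyone, and a player that supports the descriptions track can pause playback automatically for extended description instead of requiring a separately produced described version.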
I should also point out, and this is not in my slides, but the Panopto demo earlier reminded me, that this is now possible in Panopto: in the same place where you can click on captions, you can also click on audio descriptions, and at any point in the video where there's something that needs to be described, you can type in a description. Then the user can turn it on in the player, which has an enable audio descriptions button alongside the enable captions button. At the risk of embarrassing myself with this homemade video, where I was just experimenting, I'm gonna play just a little bit to show you what this is like. Terrell presenting from his home office with a backdrop of Chinese art. And then we've got a lot of just kind of messing around until about 30 seconds in. Terrell moved his head around in circles while the camera tries to maintain focus. It's four o'clock. So that's what that is all about. And we are right at four o'clock, and my last slide here is just a bunch of links to various things about both captioning and audio description. These slides are gonna be available afterwards on the webinars archive page. So I'll leave it at that. I know we want to end on time, but I just wanna see if there are any questions before we adjourn. There was one question in the chat, from Morgan: is there a requirement, like ADA compliance, to caption all videos or provide other accommodations? Are there different standards for internal videos? And then they actually found a link and posted it; it's from 3Play Media. And they also posted a quote about public entities, including state and local governments, and both internal and external video communication. But I'm wondering if you have anything else you might want to add to that.
Yeah, just that, I mean, we are required by state Policy 188, and by our own internal policy, which kind of echoes that, to comply with the W3C's Web Content Accessibility Guidelines 2.1, Level AA. And that does include a requirement that videos be captioned and audio described; audio description is actually built into that as well. And so anything that's public should be a priority for being made accessible, and that's something we really need to be focusing on. We're not doing very much audio description at all, and captioning, although we're seeing a lot more of that, has fallen pretty short too. For accommodations, if you've got things that are behind passwords, course materials and so forth that students need to access, then Disability Resources for Students will step in and they will make those materials accessible. But it's more the public things that we really need to be paying attention to and prioritizing; things that get a lot of traffic, or things that are new and might get a lot of traffic, those all need to be made accessible. And Harvard is a really high-profile case: Harvard and MIT were both sued by the National Association of the Deaf because they had so much really good content that was publicly available and was not accessible, and they've really had to do a lot of work to catch up on that. Hi, Ter, could you also put the link for the accessible visualization in the chat? Yeah, that one's behind the UW NetID, but that was part of the YTCA application. I actually am working on making a public version of that, so everybody can access it. But I'll share that with you, Sushil. Yeah, and I'll send you a message; if you'll let me know when it's permanent, I want to use it in a paper. I'm working on a paper on Miro, so it'll be a good example from our own school. Okay, excellent. Yeah, well, let's stay in touch.
If I can comment, the YTCA tool is extremely helpful in figuring out what the high priority videos are. For us in the School of Public Health, we had something like 45 one-hour COVID webinars. That's a lot of volume, and we were down two staff members, so it was really a challenge to catch up. But we essentially built it into our workflow, so now we should have 100% of our videos captioned. There's just a little bit of lag in when those things are posted. But thank you so much for that tool. It's super helpful. And thank you, and congratulations on attaining 100%. That's quite a milestone, with so many videos in particular. All right, well, thanks everybody for coming today. It's really great to have you all, and stay tuned for more accessible tech webinars this time every month.