Hello and good afternoon everyone. Sorry for the technical delays there. It's an absolute pleasure for me to introduce Sandra Partington, my ex-colleague from City, University of London. Over to you, Sandra.

Hi everyone, and thanks very much for having me… Oops, yes… oh, slide one please. Sorry about that: never move rooms and move computers at the same time. So, thanks for joining me. Today I'm talking about part of our digital accessibility project, and the slot I'm covering, the theme or slice, is making multimedia digitally accessible. You'll hear more about our project another time; what I wanted to run you through is what we've been doing, our activities, and where we've got to in attempting this kind of thing at scale. So, we've been trying out automated speech recognition with a student pilot. We've been doing a baseline review of all our video and media platforms, of which we obviously have more than we started with a year ago, blossoming due to COVID shall we say. We've had a look at some vendors, and we've worked more closely than ever, I think, with our disability and neurodiversity team.

So, I just wanted to start with where we were in September 2020. Now, we had what we called an approach; we thought we can't go for a policy, we need to test the water first, and of course that was when the new accessibility regulations were coming in. So what we did, we separated our approach and our advice and guidance to staff into pre-recorded media on the one hand, and the recording of synchronous online live teaching in groups on the other, and we took lecture capture aside, because we weren't really getting a lot of people coming in and that seemed likely to carry on as we went into the next year of COVID. So, just a little review of what we actually did in that year.
I was thinking, well, yes, we did tell staff about captioning, and I did feel really bad telling them about captioning and their duties and responsibilities, because they'd only just got their heads around GDPR, and then COVID, and it was like, oh dear me, what can I ask these poor souls to do? Some people tried to caption all their work, working sort of 24 hours a day, and soon had to give that up. Anyway, what we did do, we put in that student-led, student-run captioning service, to at least try and correct a lot of the captions. We were making an awful lot of screencasts, narration over PowerPoint; there were a lot of those to be watched before the group sessions. They were basically pre-recorded, like distance learning materials, and we thought we'd target them for correction. We switched on, wherever we could, the automated speech recognition captions in Teams, Zoom and Kaltura, and then later on we also switched on, where we could, the live ASR for live teaching, so we wandered into that area as well. We left lecture capture alone, as I say; it really went very quiet. And we didn't quite get round to disproportionate burden, but I would say at this point we now have a much better idea about it and how we'll go about it in our next year.

So next slide please, aren't I getting the executive treatment? So, as I say, we did run a student pilot. We had about eight students who were supposed to be with us for three months... that's the one, lovely, yeah, up, up... and actually they just kept carrying on, except we had to swap them over a little bit. They loved captioning, they were very good at it, and they were pretty good at all the subjects.
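The correction work the student service did is typically applied to sidecar caption files such as WebVTT, where only the cue text should be touched. A minimal sketch of that idea in Python, assuming a per-subject dictionary of known mis-recognitions; the corrections and cue text here are made-up examples, not real pilot data:

```python
# Apply a dictionary of known ASR mistakes to WebVTT caption text.
# The corrections below are illustrative, not from a real service.
CORRECTIONS = {
    "leptocaptacles": "lecture capture",   # the kind of mangling ASR produces
    "cultural": "Kaltura",
}

def correct_vtt(text, corrections):
    """Fix known mis-recognitions, leaving timestamps and the header alone."""
    fixed_lines = []
    for line in text.splitlines():
        # Timestamp lines ("00:00:01.000 --> 00:00:04.000") and the
        # WEBVTT header are passed through untouched.
        if "-->" in line or line.strip() == "WEBVTT":
            fixed_lines.append(line)
            continue
        for wrong, right in corrections.items():
            line = line.replace(wrong, right)
        fixed_lines.append(line)
    return "\n".join(fixed_lines)

vtt = """WEBVTT

00:00:01.000 --> 00:00:04.000
we switched on captions in cultural

00:00:04.000 --> 00:00:07.000
and left leptocaptacles alone"""

print(correct_vtt(vtt, CORRECTIONS))
```

A real workflow would also guard against corrections landing inside timestamps or cue identifiers, which is why the sketch skips those lines entirely.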
There was a lot of umming and ahing about whether we had to have someone from each subject area, but in the end, you know, we took who applied, and they were from a range of subjects. What they found, their best tip, was that as long as they got the same voice and the same subject area, they could really fine-tune for any accent or the way the person spoke, and for that subject area; whoever did shipping law got very good at shipping law. So it wasn't essential to match the subject, but it's good to have a range. What they did discover was the shortcomings of automated speech recognition in one of our systems; now, it does come in different flavours, so I'll tell you a little bit about that later. They also discovered a bit of an issue in how we'd set up our Kaltura media platform, because we'd missed a few things off our settings, and they really did help us there. We basically created a little captioning factory, with about 20 staff taking part, and we modelled it, and from that we gathered some really useful requirements, and also a feel for when in the year this would be busy, how much demand there would be, and who's really keen on it; some staff were like, oh, it's too complicated, I don't want to apply, sorry about that. So we had all of these things going on. The staff were excellent, there were about 20 staff who took part, and they gave us good feedback. One thing they were concerned about was that the accents of international staff were affecting the accuracy of the ASR, and interestingly we found out it wasn't; it was equally jumbling things up whether you had an accent or whether you spoke crystal-clear English. So although it was important to correct the captions, it wasn't actually the accent that was causing the problems. So again, they helped us model what staff want, how quickly things need turning around, and at what times of year.
Now, as the pilot carried on and on and on, we did begin to look at a range of suppliers and their products, and we've now got a much better idea of what we'd be asking for; I'll tell you a little bit about that later. So, yes, my little bit of automated speech recognition fun. I'm not going to tell you all 11 crazy things that it does, but it was amazing having all those students' eyes on this topic because... oh my god, I'm not used to this Chrome, I've just opened it up on another machine and sorry it's blipping away. So one item on my list of 11 crazy things about ASR was that the students, bless them (they didn't tell me at the time, they only told me in the focus group at the end), found that it would take the same word in the same recording, you know, a repetitive word about a particular topic said by the same speaker, and render it as a number of different words, not the same word each time, so they couldn't use find and replace in the editor to try and grab it. That was a little bit annoying, and that's just one of the things. However, what it did show us is that automated speech recognition usually comes with a lot attached: it's powering your live captions, it's also filling up your interactive transcript, so it's giving you a searchable version of that video, a highlight that follows the speaker, a downloadable transcript for note-taking tools. And you can see that once that was corrected, there it sits in the player with the transcript; someone can download it or watch it, and what a brilliant, brilliant thing that is. But the captions themselves can be inaccurate.
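The find-and-replace problem the students hit (one spoken term transcribed as several different words) can be worked around with fuzzy matching rather than exact search. A minimal sketch using Python's standard-library `difflib`; the similarity cutoff, function names, and the "demurrage" example variants are illustrative assumptions, not features of any particular caption editor:

```python
import difflib

def find_variants(transcript_words, target, cutoff=0.7):
    """Collect words the ASR may have produced in place of `target`,
    using difflib's similarity ratio as a rough fuzzy match."""
    variants = set()
    for word in transcript_words:
        ratio = difflib.SequenceMatcher(None, word.lower(), target.lower()).ratio()
        if ratio >= cutoff and word.lower() != target.lower():
            variants.add(word)
    return variants

def correct(transcript, target, cutoff=0.7):
    """Replace every fuzzy variant of `target` with the correct spelling."""
    words = transcript.split()
    variants = find_variants(words, target, cutoff)
    return " ".join(target if w in variants else w for w in words)

# Hypothetical example: ASR renders the shipping-law term "demurrage"
# differently on each repetition, defeating plain find and replace.
raw = "the demurrage clause sets demerge rates and demarage applies"
print(correct(raw, "demurrage"))
```

The cutoff needs tuning per term; too low and it starts "correcting" unrelated words, which is one reason a human corrector stays in the loop.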
Okay, next slide please. I guess the other thing that came out was about trying to get all our platforms behaving the same. Quite early on we realised we were missing some things. Our big mistake was that we somehow didn't have the interactive transcript on our main media player, and we didn't have the ability to change the caption size and text contrast, and we were like, oh no, where did that go? It was somewhere in the settings, so we had to get it set back up, and we did a sort of baseline. And every time I think I've got them all behaving the same, a new feature comes in, breakout rooms in Teams or something, live captions here and there, and I have to go back through them again. But we'll be carrying that on, because basically we'd like them to work as consistently as possible, so that staff don't have to remember, oh, I'm in Zoom, I need to put the live captions on, or so that somebody else can put them on. So we've got a lot of baseline comparisons for that.

On to the next slide, which is our request for quotes. I just thought I'd show you this because we're going out for just a middle-sized tender, not a whole-institution tender, so it's a smaller amount of money, which means we can actually get about five quotes. I just thought I'd show you what we came up with to send to the different vendors: this is what I need your quote to reflect. And I don't think I could have put together this type of information if we hadn't run the student pilot. You know: different time zones; short, quick turnarounds, and then a little bit longer where we've got plenty of time. They seem to have a sort of dashboard, so I could actually pile stuff in there from my different platforms and get it fixed quickly. Some of them are integrating with some systems and some others not, so it's like, well, tell me what you've got. These were quite useful, and also how many admin licences, for if we've got a team of people who need to throw things in and pick things
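The baseline comparison described above, checking that every platform exposes the same caption features, can be kept as a simple feature matrix and checked mechanically. A minimal sketch under stated assumptions: the platform names match those in the talk, but the feature flags and which platform lacks what are made-up illustrations, not the real audit results:

```python
# Baseline: the caption features every platform should offer.
# Feature names and per-platform gaps below are illustrative only.
BASELINE = {"asr_captions", "interactive_transcript",
            "caption_resize", "text_contrast", "download_transcript"}

platforms = {
    "Kaltura": {"asr_captions", "interactive_transcript",
                "caption_resize", "text_contrast", "download_transcript"},
    "Zoom":    {"asr_captions", "caption_resize", "download_transcript"},
    "Teams":   {"asr_captions", "interactive_transcript", "caption_resize"},
}

def gaps(platforms, baseline):
    """Return, per platform, the baseline features it is missing."""
    return {name: sorted(baseline - features)
            for name, features in platforms.items()
            if baseline - features}

for name, missing in gaps(platforms, BASELINE).items():
    print(f"{name} is missing: {', '.join(missing)}")
```

Keeping the matrix in one place makes the "a new feature came in, re-check everything" pass a re-run rather than a fresh manual trawl.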
out. So that was quite useful; I don't think I could have come up with that little piece otherwise, and obviously that's the beginnings of an institutional requirements document. So let's move on to the next one: this is our updated approach. Obviously we have a kind of guidance, and we take this through committees and things, but here's what we've changed. We're still going on the pre-recorded side. The other big change is that for the live classroom, if you will, we're including lecture capture now, because we're likely to have a mix of on-site and online. And here's another thing where we hadn't quite got ourselves organised, because our lecture capture has only an allowance of ASR hours; which is brilliant, it's fabulous, we quite like it, it's not bad, but we'd be through it in three months if we're back to normal, so we're going to have to sort that one out. So we brought that into the mix. We're also working a lot more closely with our disability and neurodiversity team, because they've had a big input into our new policy; it's not the lecture capture policy any more, it's about recording our teaching, and they are making stronger statements about having things recorded.

I'm going to skip on to the next slide. So there are two things we're going to try. We're going to try an external company, with a fast turnaround and a slightly slower turnaround, and we've put some imaginary money next to that; all the academics have said, well, we'll be through that in no time, but I'm just going to go for it. The other thing we've got is something that will follow the learner around and put captions wherever they are, a kind of overlay, and that could come in handy. We're going to start that off with students with a disability or who are neurodiverse; we'll pilot it with some people and see what they think, before we can say it's a useful assistive technology. We'll try it out with that one; it
could fill in some gaps: say a student is watching a stream from Echo, it could give them captions on that; something like that could be handy. We're also going to have to have a look at our lecture capture allowance. Our project is going to have to pick up this disproportionate burden question, but I think we know a bit better now what to ask for, and where, and how long things might take. And we're also going to have to help staff and create new services; for instance, they know how to book lecture capture, but do they know how to book captions to go with it?

So what I might do is whizz past the next two slides, you can see them... oh, look at that, looks fabulous, not quite finished. This is my question. As I've been doing this, I've been working with my colleagues in the disability team, and the academics, the students, the people using it, and I've been pulled in different directions: well, if we have this for a disabled student, that could be good for everyone in their cohort, so why not just do that? Or, if I knew where the disabled students or students with neurodiversity were, I could channel resources that way; I don't need to know names, but if I knew, could we push things around? And I just thought, you know, I know there's data there, I think it could work, but I haven't really worked out how to find it and use it, and I'm a bit worried it'll take me so long that I'll just plod through the approach of asking people to report in when they need something. So I'm going to shush now, because I'm going to get kicked off in two minutes. So there we go.

Oh, I can see... thanks, Sandra. We've got a comment, I think, from someone: we've had similar conversations and no solutions as yet either. Oh, that makes me feel better. Yeah, I think probably I need to stop trying to work it out in my own head, because it kind of overheats, but it's almost like I'm
asking people for data that they don't usually use for that reason. You know, they've got their data beautifully kept, but it doesn't connect up with how an individual student is going to get on. Is that City, is it? Anyway... okay, I think maybe I can bring that conversation to the digital accessibility group, and maybe we can do that there.

You can obviously continue the conversation on Discord. It does look like, unfortunately, we are out of time now, but thank you very much. Thank you for a brilliant and insightful presentation; it was really interesting to see how you've been working out this problem at City. It would be great if everyone in the chat could use their best emoji to thank Sandra for her presentation. Oh yes, and I was going to start with a gentle cup of tea before I realised it was in my ancient ALT mug from 2009, wow. And I hope ALT will come back to London soon, because we were so looking forward to seeing everybody. Okay, kill it, kill it.