Another challenging issue in SDH when it comes to sound descriptions, and something that beginners often struggle with, is deciding which sounds to include and which sounds are more pertinent and should be prioritised. And this, to many subtitlers, seems a bit subjective. How do you balance providing enough detail in captions while also ensuring readability and not overloading viewers with too much information? We have seen some deaf and hard of hearing viewers complaining that they get lots of sound descriptions which are redundant for them and perhaps distract them from what's going on on the screen.

Like I say, and I've mentioned this a few times today because it's something I'm always talking about and preaching about, especially in the beginning years of my career working in SDH, I found myself in a similar situation where you have to really consider what is needed here. With descriptors, I always say it's the intention. You have to think about the intention: what is the sound there for? Is it there just to be atmospheric? Is it there just for the sake of being there? If someone bumps their elbow on a table and it makes a tiny little thud, and we've seen them bump the elbow, does that thud really need to be presented as a sound descriptor? In most cases, probably not. But if, for example, there's a massive crash that you can't see on screen, of course we're prioritising that. So, like I say, if I had advice for any beginner in this industry, it would be to really focus on the intention of the sound and apply that same process when you go and create the files. However, I understand that, like you said, there's so much subjectivity that comes with this, and I think we just need more collaboration between the people doing the job and the people who are using the service, the deaf community. If we continue that collaboration, we're going to bring the lines closer together and create a more unified approach, I suppose.