Okay, so now I think we do understand why you're so fixated on reading research and eye-tracking research. Are there any key takeaways we could summarize from eye-tracking research on subtitling?

So you're asking whether we know anything useful that could be implemented in practice? That's how I understand your question.

Yes. Say we have a person who hasn't had that much experience with eye-tracking research. They could of course go and read all the papers, your research and so on, but let's say they don't have that time. What are the most important lessons you think we have learned as a subtitling community thanks to eye-tracking research?

For instance, there's what we learned from an eye-tracking study by my colleague Pablo Romero-Fresco on live subtitles that are displayed not as blocks but word for word or phrase by phrase. Thanks to eye-tracking research, we know that reading such subtitles resembles walking on quicksand, in a way: you're reading, reading, reading, and then your eyes are ahead of the subtitle, ahead of the word in the subtitle; then you go back; then the word appears. So there's a lot of chasing with your eyes. It's thanks to eye tracking that we know this type of subtitling is not particularly conducive to reading, so it would be better to display subtitles as blocks. That, of course, comes with another set of issues related to delay, but that's not the point here. So that's one piece of evidence we can implement immediately thanks to eye-tracking research. If you're asking from the...

Sorry to stop you there, but it seems like those guys at YouTube haven't read it, because whenever I go on YouTube and there are automatic subtitles, they are always displayed word by word. So you're saying we have already known for some years, from research, that this is very detrimental to our reading process?
Yes. Google people, if you're watching this, please make sure to change it. I understand this is the result of automatic speech recognition going word by word or phrase by phrase, and that this is simply how it's displayed. By the way, line breaks and text segmentation in those automatic subtitles could easily be improved as well. For instance, you could implement a rule that after a comma or a full stop there is a new subtitle or a line break. There's a lot of room for improvement, I would say.

So these automatic subtitles should get a bit smarter; that seems to be the conclusion.

That's right.