I think that's the tutorial, so let me go through some conclusions. I hope you have enjoyed it, or at least learned something. The first conclusion is that extracting high-level semantic structure from text is useful both in MIR and in musicology. Of course, there is not yet an established methodology for really exploiting this information; we have tried some ideas and obtained some results, but there are many more possibilities. We have also seen that word embeddings and deep learning open up a whole new world of possibilities, and there is a lot of research that can be done in MIR with them; indeed, this is what everyone is publishing at conferences: deep learning, more and more.

This tutorial was also an initial attempt to boost interaction between the NLP and MIR communities. We want to convince people in MIR that using text features, or combining text with audio rather than using audio alone, is interesting; and we want to convince people in NLP that music corpora are interesting, and that they should create datasets on which to try their approaches. You can see here a list of the datasets we have released for classification, recommendation, and similarity: artist biographies, songs, and albums, all freely available. If you don't have the slides, they are linked at the end, together with the references.

Now, the challenge. Just a reminder that there is a shared task, Task 3, on focused musical named entity recognition and linking, in the context of the European Semantic Web Conference in Portorož. It seems to be a very popular place, because there was also an event there last year. It is a challenge where we invite systems to perform entity recognition and linking in the musical domain, and there are certain requirements on the output format.
As far as we know, this is the first time a proper shared task has been proposed at the intersection between natural language processing and music information retrieval. There are two sub-tasks: one is the recognition, and the other is the linking. Systems are invited to compete in both, but it is possible to participate in only one of the two. In addition to the prize for the overall winner of the whole Open Knowledge Extraction challenge, which has three sub-tasks, we have a cash prize for this task on its own, thanks to funding and support from the María de Maeztu programme. We don't know the amount yet, but it will be enough for researchers; we do science, we don't know much about money, so whatever we get will be good. Those who want to participate can get in touch with Sergio, myself, or Alvin, who is in charge of the task presentation. And I think that's it for the challenge.

Here is an example from the challenge. The challenge provides documents like this one, and the task is, first, to identify where the entity is; second, which type it is, for example an artist; and third, what its link in MusicBrainz is. That is what the output of the system should be: one part is the entity recognition, and the other is the linking. Look at Paul Simon, for example: the text mentions the release of the self-titled album "Paul Simon". Here "Paul Simon" is the album, because Paul Simon has an album called Paul Simon. These are the kinds of problems we would like to highlight.

Where is the conference? In Portorož. When? The last week of May. And the deadline for the challenge is the 10th of March. And this is my favourite slide of the presentation: feel free to come work with us.
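The "Paul Simon" example above can be sketched in code. This is only an illustrative sketch, not the official challenge output format: the `Annotation` class and the MusicBrainz-style identifiers below are hypothetical, but they show the three pieces of information a system must produce (mention span, entity type, knowledge-base link), and why the two occurrences of the same surface form get different annotations.

```python
# Illustrative sketch of entity recognition + linking output.
# The Annotation class and the kb_link identifiers are hypothetical,
# not the official task format.

from dataclasses import dataclass


@dataclass
class Annotation:
    start: int        # character offset where the mention begins
    end: int          # character offset where the mention ends (exclusive)
    mention: str      # surface form as it appears in the text
    entity_type: str  # e.g. "Artist" or "Album"
    kb_link: str      # identifier in the target knowledge base


doc = "Paul Simon released the self-titled album Paul Simon in 1972."

annotations = [
    # The first "Paul Simon" refers to the artist...
    Annotation(0, 10, "Paul Simon", "Artist",
               "musicbrainz:artist/paul-simon"),
    # ...while the second one names his album "Paul Simon".
    Annotation(42, 52, "Paul Simon", "Album",
               "musicbrainz:release/paul-simon"),
]

for a in annotations:
    # offsets must point at the actual mention in the document
    assert doc[a.start:a.end] == a.mention
    print(f"{a.mention!r} [{a.start}:{a.end}] -> {a.entity_type} -> {a.kb_link}")
```

The point of the example: recognition alone ("there is an entity here") is not enough, because disambiguating identical mentions requires the type and the link.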
Now I'm finishing my thesis, and perhaps, like many researchers at the end of a thesis, I want to work on things other than the ones I have been working on, because only now do I realise what the most interesting part really is. For me, beyond the current hype, the most interesting directions in the interaction between NLP and MIR might be chatbots and question-answering systems, the combination of text and semantics, multi-modality with deep learning, or generative models that are able to generate many different kinds of text. So if anyone wants to start a thesis on this kind of data, I would really say there is a lot of room for research here, and you can get very good results. I think we will see a lot of this in the future. I would like to thank all of you very much for these three hours.

We have a question and a little suggestion: for the multi-modal part you presented, are there results on how much it improves the task, recommendation or whatever?

Well, I have a few results from when I was at Pandora, and yes, it improves. The thing is that text features set a limit on the task, but they can be very, very strong. In this case you have artists on one side and songs on the other: a text representation of the artist (or tags, or whatever) and the audio of the song. If you want to go really fine-grained in the recommendation, using the audio of the song improves the result, but you also want to combine it with the information from the artist, because text-based classification is very strong. Text features are stronger than audio features; that is the trade-off.
So if you combine them, you can improve. But I don't have published numbers yet; I'm working on that, and I hope to have a paper on it soon. Any other question? Then I think we are all released for the afternoon. Thank you.
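The trade-off discussed in the answer above can be sketched as a simple late-fusion baseline: artist-level text features are strong but coarse, song-level audio features are fine-grained, and one straightforward way to combine them is to weight and concatenate the two vectors before computing similarity. Everything here is hypothetical (the dimensions, the weighting, the random "features"); it only illustrates the combination strategy, not Pandora's actual system.

```python
# Hypothetical late-fusion sketch: combine artist-level text features with
# song-level audio features. Dimensions and weights are illustrative only.

import numpy as np


def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def fused(text_vec, audio_vec, w_text=0.7):
    # Weight the (stronger) text features more heavily, then concatenate.
    return np.concatenate([w_text * text_vec, (1.0 - w_text) * audio_vec])


rng = np.random.default_rng(0)

artist_text = rng.normal(size=100)  # e.g. tf-idf / embedding of a biography
song_audio = rng.normal(size=50)    # e.g. audio descriptors for one song

query = fused(artist_text, song_audio)

# A candidate by the same artist (similar text side) but a different song:
candidate = fused(artist_text + rng.normal(scale=0.1, size=100),
                  rng.normal(size=50))

print(round(cosine(query, candidate), 3))
```

With this kind of fusion, the shared artist-side features keep same-artist candidates close, while the audio side still lets individual songs differ, which is the fine-grained behaviour described in the answer.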