Okay, so, good afternoon everyone, and thank you for the invitation to talk about music at the Transkribus User Conference. Let me start with an introduction to the problem we are dealing with here. The thing is that music is a key element of our culture, and humanity learned to preserve and transmit music by writing it down. As a result, we estimate that there are billions of pages of written music all over the world. As happens with text sources, these music sources represent an incredible heritage, encoding centuries of human creativity. This heritage is crucial for carrying out musicological studies that will allow us to reach a knowledge beyond our current understanding of the evolution of music itself, which is paramount to better understand our own cultural evolution.

But the big obstacle in this context is that most of these sources exist only as physical copies stored in libraries and archives. In this form they are doomed to remain hidden, and their huge potential for exploring our music culture is prone to be completely wasted. So, against this situation, there have been many efforts to develop technology that takes these music sources as input, extracts their content, and presents it in a format that finally makes indexing and retrieval possible. This is known as optical music recognition (OMR), or handwritten music recognition (HMR) when applied to handwritten sources, in analogy to handwritten text recognition. I am now going to explain the technology behind optical and handwritten music recognition very briefly. The task of teaching computers to read music has been studied for years, and currently there exists some available software for optical music recognition. However, although they do not mention this explicitly when advertising the product, these software solutions only work for modern printed music.
And even under these conditions, the results are quite often said to be disappointing. So in recent years, my research team has been actively working on further developing this kind of music reading system for all types of music, but especially for handwritten music heritage. We have many recent use cases demonstrating the success of this technology with different sources and manuscript types. This experience has allowed us to determine a general workflow which can be adapted to multiple scenarios with moderate effort. What we are trying to do now is to transfer this knowledge, developed in a scientific, let's say controlled, scenario, into usable tools that everyone can benefit from. In what follows, I am going to present these rather generic building blocks for optical or handwritten music recognition, using a use case about mensural music manuscripts.

So, musical documents include different types of information. In general, for the purposes of written music recognition, we will be interested in two of these: the notes, placed on the staff lines, and the lyrics, which represent the text that must be sung. We can also find ornamental letters that contribute to the text to be recognized. Therefore, a layout analysis stage must first detect and categorize the different regions of the image, to know both the type of information they contain and where they are located within the image.

Once we have isolated these components, on the one hand, each region with musical notes is processed by a handwritten music recognition system to recover the musical information encoded in the image. For this, we have developed a kind of generic music encoding language that basically encodes, for each symbol, its class — for example the clef, the first sign on the staff, which indicates how to interpret the rest of the notes — and, in addition, a positional component that gives the information of where each symbol is placed on the staff. So, as I said, this process obtains a textual encoding of the written music. On the other hand, the text regions must be processed by OCR or HTR systems, obtaining the list of characters contained therein. Here, with music, we have an additional step: we must separate the text into syllables, because we need to know how to put together the words to be sung with the notes.

Once the musical information, notes and text, is recovered from the image, it must be transformed into a structured digital format, a standard format that allows subsequent processing. There are many of these in music; among them we can highlight MEI, the analogue of TEI in the musical context, but there are many other possibilities with wide support, and there are even converters to navigate from one to another. Basically, the idea is to finally represent the music content in a standard format.

So far, this is what we have developed in my research group. The final step is to deploy these techniques to generate usable tools. In this work, we propose the inclusion of the know-how gained in recent years into the Transkribus tool. In this way, any music manuscript could be processed automatically in the same way that text manuscripts are already being processed by Transkribus. Integrating this into the Transkribus infrastructure will allow all its users to use and enhance the existing OMR and HMR technology.

So, basically, to sum up my talk, I would like to emphasize that the music context is an exciting application with a wide range of possibilities. Research from the computer science perspective has already shown promising results, so we know that the task is doable with moderate effort. What is now necessary is to transfer this know-how into usable technologies.
Of course, each use case will require some minor changes to the general workflow, but we are pretty sure that we can reuse our general building blocks for processing music. So the main conclusion of my talk is that the current state of affairs represents a good opportunity to increase Transkribus's scope of application, to include all the music heritage that can be found around the world. Before ending, let me say that we are looking for collaborators who would be willing to join us in bringing this technology into Transkribus. Currently the project is led, on the one hand, by myself from the technical, computer science perspective, and on the other hand by Robert Lucida from the point of view of digital musicology. So please contact me or Robert if you have any questions or are interested in this project. And that's all from my side. Thank you very much for your time and attention.