Welcome, everyone. It's my pleasure to have Emilia Gómez here today for this Integrative Research Seminar. Emilia Gómez is an associate professor in the Department of Information and Communication Technologies, and she is a Serra Húnter fellow. She received an engineering degree in signal processing from the University of Seville, some time ago; then she did her PhD in our department, and over the years she has done research stays in a number of places, including Stockholm, Sweden; McGill University; and the Centre for Digital Music at Queen Mary University of London. So, without further ado, here is Emilia.

Okay, thank you very much for the introduction, and thank you very much for being here. I'm very happy to talk about my research, which deals with music information retrieval. In this talk I will discuss the challenges and opportunities of this field, with a focus on classical music.

First, I would like to convince you all to do research on music, or at least to play an instrument. Music has been found to be very positive for cognitive stimulation, and not only for learning: music also has important links with memory and emotion. For instance, memories attached to music can be retrieved faster and have a greater influence on our mood. Music in general has a strong influence on all areas of our lives, such as our social relationships, our relationships with peers, and our culture.

But my research is not only about music; it is about music data. One could say there is one song for every heart, so there are as many songs as there are people in the world. In fact, just on online music platforms we can find a huge number of songs: on iTunes or Spotify we find up to 43 million songs, listened to by around 100 million users. For each of these songs we can also find, for instance, versions that people record and share on different social networks. On YouTube, if we search for versions of Mahler's Symphony No. 4, as you see in the figure, we find a great many versions of this piece. Not only that: a single version can be made up of a large amount of data. For a recording of Mahler's Symphony No. 4 we can find not only audio recordings but also video recordings, score information, and textual information related to the composer, the piece, and the performers. So there is a large amount of data associated with even a single recording. This is what we deal with: large-scale music collections, not only audio but everything around it.

Music information retrieval tries to find things in these 43 million songs, to build technologies that can help you retrieve music. At the same time, we would like to understand how a person would describe each of those pieces of music, and how we can emulate those descriptions with computational models to do large-scale analysis. This opens up a huge range of applications and research areas, because you can do research on quantitative, large-scale data.

This is the methodology we usually follow in our community. First, we talk to humans and model the way they perceive or describe music, and depending on the application we define some descriptors; so we integrate knowledge from human perception and cognition. Second, since we process music signals, we have to deal with acoustics and signal processing.
So some of our research is related to that area too. And since we are dealing with music, we also have to incorporate knowledge from music theory: for instance, how musical intervals are built, how people use consonance and dissonance, how scales are constructed, and so on. Finally, as we deal with large amounts of information, we have to apply, and also contribute to improving, algorithms for machine learning and pattern recognition.

Our focus, particularly at the Music Technology Group, is to build real applications for real people. So we are also concerned with very practical aspects of research, such as whether something works in real-world scenarios, scalability, user-centered design, and so on, because we have discovered that some algorithms may be very good for one particular application but totally useless for another.

The music information retrieval area is now a community of people who meet around a conference, the ISMIR conference; there is a society, of which I am now president-elect, and it has organized the conference in different places, this year in New York. The idea is that it brings together people from different areas: musicology, computer science, digital libraries, and so on. It is also very well connected to industry, because many companies are building applications related to our research.

The evolution of the field, before I talk about a particular application, can be summarized as follows. People started by describing musical scores, because at the beginning this was the only information we had, and musical scores are usually available for classical music. Later, people started to describe not only scores but audio, so you could work with audio signals from different styles and do large-scale analysis. After that, people tried to describe not only music content but also context information: for instance, a lot of research on web crawling, on incorporating Twitter and other social information found on the web, or images from album covers, to describe music. More recently, people are working with multimodal data from many different sources; they are applying deep learning and similar large-scale machine learning techniques, in contrast with the feature engineering we used to do, where we tried to hand-design good features. And people are moving from system-centric to user-centered designs, incorporating, for instance, personalization into the models we build.

In this talk I want to focus on a particular repertoire, because I think the challenges are easier to explain there than in the general case, and it is this scenario: imagine you go to a classical music concert and you see this orchestra. I don't know if you go very often to this kind of concert; the audience is rather old, you don't see many young people, and there is a conductor with something like a hundred musicians playing along with him. Our project asked what the main challenges are that people face when they go to this kind of concert, where they hear a piece of music they have never listened to and want to discover, and whether our technologies are ready, or can help, to provide meaningful, enriched information that helps people enjoy music concerts better, in a personal way.
This is a European project I have been coordinating over the last three years; here I will focus on the work our own group has been doing. It incorporates partners from music: for instance, one of the best orchestras in the world, the Royal Concertgebouw Orchestra in Amsterdam, and a conservatory from here with which we worked very closely in this project. We also have technological partners, mainly from Austria, and we are having so much fun that even though the project has finished, we continue: we have some spin-off projects from it.

The idea is to transform a music concert into what we call a multimodal, multilayer, and multiperspective digital artefact. What does multimodal mean? That you can experience the concert in different modalities: you can check video files, look for audio files, or read text. What does multilayer mean? That you can describe different things within the concert: the structure, what the musicians are doing, and so on. And finally, multiperspective means that we want to access this content in different ways, for instance from different locations in the auditorium, or from different perspectives, say the perspective of a naive user versus the perspective of a musician. So we have worked on providing facilities to explore, enjoy, and share the concert, and we provide technologies that can be used before, during, and after the concert, on a tablet or a computer. In particular we focused on these scenarios: a virtual concert guide, overseeing the music, focusing attention and switching viewpoints, and joining the orchestra; I will tell you a little about what has been done in each respect.

Of course, addressing this repertoire is very challenging for our community, because we usually work with songs, and songs are very short; most research is even done on thirty-second excerpts, not whole songs. Here we deal with pieces that are an hour long. In addition, the pieces are very complex: analyzing a pop song built from three or four chords is not the same as analyzing this kind of music, which is very complex and has a lot of associated data. Acoustically it is also very complex; this is an image from a prototype I will show later: there are many different instruments playing, there are sections, there are different timbres we have to analyze, and of course we have different modalities. This is the repertoire; I won't go into details, but just to give an idea of the kind of data we have: different cameras, different microphones, a lot of information. Depending on the quality and on the information available, one of these concerts may take around 100 gigabytes, and if you include different performances of the same piece, multiply that by the number of performances.

One thing I wanted to mention is that working with classical music has the advantage that you actually have the score of the music; that does not happen in pop music, where you usually have no score at all. The musical score is available, but still far from fully digital, so we have problems there. So the first thing we did was to develop a tool so that all this information is accessible for building applications, for learning, for training models, and so on, and we contributed to a repository developed at the MTG called Repovizz, a web-based browser where
you can store multimodal information about a classical music concert. I will now take some time to play you some music, and then you can check and visualize the different kinds of information we have. Just keep in mind that this all runs in a browser, so it can be integrated into many applications through an API; you can always build applications on top of it. It's a web system. Now I will play you one of the pieces we worked on in the project. [music] This should make you realize how much information we can get from a single concert, and the different ways of exploring it. Of course, this was the visualization for the framework; not all of it is used in the real application.

Now I want to tell you about five research topics we have been working on, because our goal is to provide facilities to describe this content and interact with it, and we have mainly focused on these topics; I will talk very briefly about each.

The first one is melody. You know, melody is the thing you imitate or remember if you listen to a song and want to recall it; it's what we remember most easily, better than, for instance, the instrumentation. Melody extraction methods try to simplify music into a contour. Look at this now: this is "Twinkle, Twinkle, Little Star," and if you sing it you will trace this melodic contour. We try to go from a very complex piece, symphonic music, to this simple melodic contour, with the difference that we don't have the score; we only have an audio signal. Why do we want to do that? Because for naive users it is more intuitive to look at the melody than at the score, and if you want to retrieve music you can sing it, compare it against different songs or different parts, and find the part you want.

Of course, melody is related to pitch, which is the perceptual attribute by which we differentiate sounds, and pitch is linked to fundamental frequency, the signal property we want to measure, which is related to periodicity in the time-domain signal or in the frequency domain (a minimal sketch of this basic estimation task appears at the end of this passage). This is an example for a very simple signal; the signals we have in an orchestra are much more complex. So what is the goal of this research? It has been to extract just the melody from the audio signal, and this is currently the work of Juan Bosch, who is here in the audience. If you take popular music, for instance this song ["there was something inside of you, something I thought that I would never find, angel"], extracting the melody from the signal is in fact rather difficult, but from a perceptual point of view it's easy, and this would be the melody. [music] Sorry, what happened?
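To make the pitch-to-fundamental-frequency link concrete, here is a minimal sketch of f0 estimation on a monophonic recording, using librosa's implementation of the pYIN algorithm. This only illustrates the basic task on a simple signal; it is not the symphonic melody extraction method discussed in this talk, and the file name is hypothetical.

```python
# Minimal sketch: estimate the f0 contour of a monophonic recording.
# This illustrates basic pitch tracking, not the full symphonic
# melody-extraction pipeline described in the talk.
import librosa
import numpy as np

# "voice.wav" is a hypothetical monophonic file (e.g., someone singing).
y, sr = librosa.load("voice.wav", sr=44100)

# pYIN returns an f0 estimate per frame plus a voiced/unvoiced decision.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y,
    fmin=librosa.note_to_hz("C2"),   # lowest expected pitch
    fmax=librosa.note_to_hz("C6"),   # highest expected pitch
    sr=sr,
)

times = librosa.times_like(f0, sr=sr)
for t, f, v in zip(times, f0, voiced_flag):
    if v and not np.isnan(f):
        print(f"{t:6.3f}s  {f:7.2f} Hz")
```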
So the task of melody extraction there is roughly equivalent to the task of detecting the voice and then its pitch. But if I play you an excerpt of classical music, please try to sing along with it and decide what the melody is. [music] Okay, so you see it's more complex. It is in fact very challenging to describe melody in this type of material, because there are many overlapping sources, so the signal is very complex. In addition, the melody is played by different instruments: sometimes they play together in unison, meaning the same note, but sometimes they play different notes and you still perceive the same melody. So we have been working on extracting melody from this kind of data.

Those algorithms, for those of you who do not know them, are based on three principles, and here I show a diagram of a method that Justin Salamon and I developed in 2012, which was mostly designed for singing but which incorporates the three principles I was mentioning. The first one is salience: we might say that the melody source is more salient in the spectrum, in the signal, than the other sources, so we estimate salience in a time-frequency representation of the signal (this is a spectrum) and try to determine which source is most salient; I won't go into the details of the algorithm (a toy version of this idea appears after this passage). The second principle is to incorporate continuity rules: melodies usually have pitch values that are continuous; you cannot have a jump of one octave from one millisecond to the next, or over 50 milliseconds. These are ideas from the computational auditory scene analysis area that we incorporate to track melodies. And finally we apply musical knowledge. Why? Because melodies have certain properties: for instance, they might belong to a scale, they might relate to the other instruments, and so on. This is the state-of-the-art algorithm we use, and from there we tried to apply it to orchestral music.

The first thing we do, and this is the usual methodology, is to gather many musical excerpts from different orchestral pieces. Here you see statistics of which instrument plays the melody: sometimes (oops, sorry) a string instrument plays it, sometimes brass or woodwinds, but much of the time there is an alternation or a combination of instruments. From that, we try to annotate what the melody is, and for that we ask people to sing along with the music: we ask them to listen several times, to memorize it a little, and to sing along. Then we analyze aspects such as their perception and their agreement, how this relates to different musical properties (for instance, if the melody is very complex it's more difficult to sing), and how it relates to our singing ability, because maybe we cannot sing a certain interval. We have analyzed all that, and after measuring the agreement we have a collection with annotated melodies that we can use to evaluate our computational models. I will just show you an example of the melody you heard before; let me play a little so you remember it, and then how four different people sang this melody; I guess some of you might have taken part in this experiment. [music] Okay, so you see people do it pretty well; they have some problems, but they mostly agree. But the algorithms do much worse [music], and I guess you would agree that output is not usable in any real application.
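As an illustration of the first principle, here is a toy version of a harmonic-summation salience function: for each candidate f0, it sums spectral magnitude at the candidate's harmonics with decaying weights. This is a deliberately simplified sketch of the general idea, not the Salamon and Gómez algorithm itself, which adds sinusoid extraction, careful weighting, and contour tracking on top of this; all names and parameters here are illustrative.

```python
# Toy harmonic-summation salience: for each candidate fundamental,
# sum weighted spectral magnitude at its first few harmonics.
import numpy as np

def salience(mag_spectrum, sr, n_fft, f0_candidates, n_harmonics=8, decay=0.8):
    """mag_spectrum: magnitude of one STFT frame (length n_fft // 2 + 1)."""
    hz_per_bin = sr / n_fft
    sal = np.zeros(len(f0_candidates))
    for i, f0 in enumerate(f0_candidates):
        for h in range(1, n_harmonics + 1):
            bin_idx = int(round(h * f0 / hz_per_bin))
            if bin_idx < len(mag_spectrum):
                # Higher harmonics contribute with decaying weight.
                sal[i] += (decay ** (h - 1)) * mag_spectrum[bin_idx]
    return sal

# Example: a synthetic frame containing a 220 Hz tone with harmonics.
sr, n_fft = 44100, 4096
t = np.arange(n_fft) / sr
frame = sum(np.sin(2 * np.pi * 220 * h * t) / h for h in range(1, 6))
mag = np.abs(np.fft.rfft(frame * np.hanning(n_fft)))

candidates = np.arange(55.0, 880.0, 1.0)            # candidate f0s in Hz
best = candidates[np.argmax(salience(mag, sr, n_fft, candidates))]
print(f"most salient f0 candidate: {best:.1f} Hz")  # should be near 220
```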
So we have been working on improving that kind of algorithm. We have also published this dataset so people can evaluate their methods on our particular material, and we have managed to get good results using algorithms that are, without going into details, more general: not only focused on singing, able to characterize not just one melodic line but several, and able to exploit the score when it is available. There is also an annual evaluation campaign in our area (MIREX) where you submit your method and it is evaluated against others, and we got very good performance, not only for symphonic music but also for other genres, so we managed to contribute to the field more broadly. We have also developed a prototype for visualizing melody; let me show you... I don't see my screen now, okay, so I will skip this, but if you have time later you can connect to it and visualize the melody while you listen to the music.

The second topic I wanted to mention is structure. Musical structure is a very interesting research topic, especially in this repertoire. Why? Because you have pieces that are an hour long and you want to know how they are organized: for instance, some people want to know when they have to applaud, when the soloist is going to play, or when the important themes appear. Structure is related to repetition and also to change, so we use principles from pattern recognition (we look for patterns in the music) together with principles of continuity and novelty detection (a minimal sketch of the novelty idea appears below). Experts also do structural analysis: when they analyze music they segment it and name the segments, for instance A and B, or exposition, development, and so on.

Structure in symphonic music is built from two different resources. First, composers use tonality. For those of you who know music, tonality is the way we organize pitch information, and it is used to convey tension: when you hear something in the key you are more relaxed, and when you hear something outside the key you are more tense, so composers use that to communicate structure. In symphonic music there are many changes of key, which are called modulations. For instance, this is the typical layout of the sonata form used in many symphonies (not exactly as shown, but in a more developed way): an exposition in the tonic, a development in the fifth, and a recapitulation back in the tonic. The other resource composers use to communicate structure is instrumentation: which instruments of the orchestra play, and which notes, because by making instruments play different notes you can create dissonance effects, or well-known devices such as the contrast between sections or the orchestral crescendo, where the number of instruments and the orchestration keep changing. These are the two aspects we try to model computationally.

Of course, to evaluate we need annotated data. Instead of asking naive listeners to annotate a one-hour recording of a symphonic piece, we turned to experts.
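To make the novelty-detection principle concrete, here is a toy version of Foote's classic method: build a self-similarity matrix from frame-wise features and slide a checkerboard kernel along its diagonal; peaks in the resulting novelty curve suggest segment boundaries. This is a generic textbook illustration, not the project's actual segmentation pipeline, and the file name is hypothetical.

```python
# Toy novelty-based boundary detection (after Foote, 2000):
# self-similarity matrix + checkerboard kernel along the diagonal.
import numpy as np
import librosa

def novelty_curve(features, kernel_size=32):
    """features: (n_dims, n_frames) matrix, e.g. chroma or MFCCs."""
    # Cosine self-similarity between all pairs of frames.
    f = features / (np.linalg.norm(features, axis=0, keepdims=True) + 1e-9)
    ssm = f.T @ f                                   # (n_frames, n_frames)

    # Gaussian-tapered checkerboard kernel: positive within-segment
    # quadrants, negative cross-segment quadrants.
    half = kernel_size // 2
    g = np.exp(-0.5 * (np.arange(-half, half) / (0.5 * half)) ** 2)
    s = np.sign(np.arange(-half, half) + 0.5)
    kernel = np.outer(g, g) * np.outer(s, s)

    # Correlate the kernel with the SSM along the main diagonal.
    n = ssm.shape[0]
    nov = np.zeros(n)
    for i in range(half, n - half):
        nov[i] = np.sum(kernel * ssm[i - half:i + half, i - half:i + half])
    return nov

# "movement.wav" is a hypothetical recording of one movement.
y, sr = librosa.load("movement.wav")
chroma = librosa.feature.chroma_cqt(y=y, sr=sr)
nov = novelty_curve(chroma)
peaks = librosa.util.peak_pick(nov, pre_max=16, post_max=16,
                               pre_avg=32, post_avg=32, delta=0.1, wait=32)
print("candidate boundaries (s):",
      np.round(librosa.frames_to_time(peaks, sr=sr), 2))
```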
Many experts write about music, and they write about the structure of pieces. So we took a piece, the Eroica, Beethoven's Third Symphony, which many people have written about: we collected eight published papers that provide a structural analysis of this piece, and we also gathered the program notes from ten symphony orchestras (oops, wrong slide) to find out what kind of information they give. When you work with musicians or musicologists, it is usual that they do not agree; we looked into it and found plenty of discrepancies, so music analysis is also very personal. To arrive at a ground-truth annotation we set a threshold: if three people agree that there is a boundary, we keep it, and even with that we found 16 segment boundaries in the exposition of the Eroica symphony alone. That means a lot of segments to detect. Those segments are the red lines you see here, and then we applied some of our computational methods.

One descriptor we use is tonality: we estimate it directly from the audio, or from MIDI, using information from cognition, namely how tonality perception is characterized (a toy version of this key-estimation idea appears at the end of this passage), and we do it in a multi-scale way. This is somewhat similar to wavelets: you have global descriptors and local descriptors. For instance, this would be the global key, in green (I don't remember which key it is, but it is one particular key), and then the local keys, and from those you can find segments. Some segments were found using key information; others were not. In addition we also analyzed the instrumentation, the orchestration, and we found that all relevant structural boundaries correspond to changes in instrumentation. For instance, here you see a simple representation of the instruments and when they are active in the piece; by identifying whether they are playing or not, we can identify those segments. So although symphonic pieces are very complex, we can model the structure and represent it to people, so that they have a map to follow when they listen to a piece. Of course, large-scale evaluation is difficult: if you want to evaluate over many symphonies, that is very hard, because analysts do not agree and have different ways of annotating. Also, some features should be personalized: people who know music pay attention to things closer to music theory, while people who do not may pay more attention to which instruments are playing.

The third topic I want to tell you about is instrument emphasis. The idea is that when you listen to an orchestra, there are a hundred instruments and you hear the whole thing, but we want to give people the possibility of listening to particular instruments: say, I want to listen to the cello, or I really like the oboe, or I don't really know how a clarinet sounds. We would like people to be able to access the different instruments of the orchestra. This is what we call instrument emphasis: I want to emphasize this instrument, to hear it louder. So we have been working on separating the different instruments from the acoustic signal, emphasizing them after separation, and locating them in the scene, so that once we locate and separate the instruments we can simulate different listening positions in the hall, as if you were moving through different places in the auditorium.
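As a hedged illustration of how key can be estimated from audio using knowledge from music cognition, here is a minimal template-matching sketch: it correlates an averaged chroma vector with the Krumhansl-Kessler probe-tone profiles over all 24 major and minor keys. This is the classic textbook approach, shown only to make the idea concrete; the project's multi-scale method is more elaborate, and the file name is hypothetical.

```python
# Minimal key estimation: correlate averaged chroma with the
# Krumhansl-Kessler major/minor profiles over all 24 rotations.
import numpy as np
import librosa

# Krumhansl-Kessler probe-tone profiles (given for C major / C minor).
MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                  2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
                  2.54, 4.75, 3.98, 2.69, 3.34, 3.17])
NOTES = ["C", "C#", "D", "D#", "E", "F",
         "F#", "G", "G#", "A", "A#", "B"]

def estimate_key(y, sr):
    # Average chroma over time gives one 12-dimensional pitch-class vector.
    chroma = librosa.feature.chroma_cqt(y=y, sr=sr).mean(axis=1)
    best = None
    for tonic in range(12):
        for profile, mode in ((MAJOR, "major"), (MINOR, "minor")):
            # Rotate the profile so its tonic matches this candidate.
            r = np.corrcoef(chroma, np.roll(profile, tonic))[0, 1]
            if best is None or r > best[0]:
                best = (r, f"{NOTES[tonic]} {mode}")
    return best[1]

# "excerpt.wav" is a hypothetical audio file.
y, sr = librosa.load("excerpt.wav")
print("estimated key:", estimate_key(y, sr))
```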
Those approaches are usually based on knowing, at each moment, which notes each particular instrument is playing, so we need to align the score with the acoustic information, and that is still a huge research topic. In our approach, and this is the work for which Julio Carabias was a visiting researcher here, we know the number of instruments and the number of channels, and we estimate in which channel each instrument is most predominant; from that we try to separate it from that particular channel. Of course we have a multi-channel recording of the orchestra, but the microphones are not located close to specific instruments; they are just at the top, in an array, not targeting any instrument in particular. The approach we use for source separation, for isolating an instrument, is based on non-negative matrix factorization; I guess many people use it for other purposes, but we use it for this, trying to separate the different channels, and we train with isolated samples of musical instruments to learn their timbre, so that we can separate them in the mix (a toy sketch of this factorization idea appears after this passage).

Of course it is very important to align the score with the audio very accurately, because otherwise the results are very poor in terms of quality; I won't play this example, because it is very bad. The algorithms that do score alignment are still not very precise in time: they may have an accuracy of one or two beats, or half a beat. So we have also been working on synchronizing audio with the score, using some basic techniques from image processing: we look at the spectrum as an image, and not only at the spectrum but at the gains of the NMF, the non-negative matrix factorization, to locate the start of each note precisely.

I will now show you an example of instrument emphasis, one minute of one piece: first you will hear the minute, and then you will hear it played only by the separated instruments. We have seen the quality is not the best, but if you mix it back with the other instruments, as here, you can emphasize an instrument and hear it louder; you don't have to fully separate, just emphasize. This is also a prototype we have on the web, which you can check, where you can select and listen to the different instruments of the orchestra. And we can do things like this video, from an engineering project where a student created a way to remix the different instruments so as to simulate being in different positions in the hall; I will play a little of that. [video] Okay, we are not very good at visuals, but at least you get the idea. Now we are working on ways to provide immersive experiences in classical music concerts using this kind of technology. Of course, we were very proud of our results with this material, and the musicians said: this is very bad, you cannot release it, this is awful. So we still need to work on the quality of the separation. We also take advantage of multi-channel recordings, but sometimes these are not very useful if the microphones are not well placed, so we also want to work out the optimal recording configuration.

The fourth topic I want to talk about is gesture modeling. We have also been working on modeling the gestures of the musicians; this is joint work with ESMUC, the conservatory.
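Returning to the separation approach described above, here is a toy sketch of NMF-based separation with pre-learned timbre templates: the template matrix W is kept fixed (learned beforehand from isolated instrument samples), only the activations H are estimated on the mix, and each instrument is extracted with a soft Wiener-style mask. This is a generic single-channel illustration under simplifying assumptions, not the project's multi-channel system; the example data and instrument grouping are hypothetical.

```python
# Toy NMF separation with fixed, pre-learned instrument templates.
# V ~ W @ H, where W (frequencies x templates) is learned beforehand
# from isolated samples and kept fixed; only H is updated on the mix.
import numpy as np

def estimate_activations(V, W, n_iter=100, eps=1e-9):
    """Multiplicative updates for H minimizing KL divergence, W fixed."""
    H = np.random.rand(W.shape[1], V.shape[1])
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.sum(axis=0)[:, None] + eps)
    return H

def separate(V, W, template_groups):
    """Wiener-style mask per instrument.
    template_groups: dict mapping instrument name -> template indices in W."""
    H = estimate_activations(V, W)
    WH = W @ H + 1e-9
    return {name: (W[:, idx] @ H[idx, :]) / WH * V
            for name, idx in template_groups.items()}

# Hypothetical example: V stands in for the magnitude spectrogram of the
# mix; the first 20 columns of W stand in for templates learned from
# isolated violin samples, the next 20 from isolated oboe samples.
rng = np.random.default_rng(0)
V = rng.random((1025, 400))
W = rng.random((1025, 40))
groups = {"violin": np.arange(0, 20), "oboe": np.arange(20, 40)}
stems = separate(V, W, groups)
print({name: stem.shape for name, stem in stems.items()})
```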
There we have access to conductors and musicians, and we have been trying to model the conductor, because he is the boss of the orchestra: he is the one conveying, with his gestures, all the information to the musicians. We developed tools to capture gesture, motion-capture information such as the velocity or acceleration of the conductor's movements, in real time using a Kinect; you can store all this information in Repovizz and access it later (a minimal sketch of such kinematic features appears at the end of this passage). With that we have been doing several things. For instance, we have observed how people move with the music, how people try to conduct an orchestra, and we have seen common strategies that people use to communicate different tempi or intensities with different gestures; maybe some of you took part in this study. We have also been working on modeling expression, for instance modeling how each person conveys articulation, such as legato or staccato, and using machine learning to predict it, so that you can control a synthesizer that produces sound that is more legato or staccato depending on your particular articulation; this will be presented at CHI, and you can see the video, but I don't have time to go into it. And now we are building a game called Becoming the Maestro, which some of you may have played, which tries to gamify this: making people aware of the challenges of directing a classical orchestra, for instance cueing the entrances of all the instruments or making sure they don't drift from the tempo. We are building it now and will run a large-scale evaluation in the near future.

Finally, I want to end with the topic of music visualization. In recent years we have worked a lot on extracting information from music, so we can have information about the melody, the instruments playing, the gestures, the harmony, the structure; but how do we present that to users? So we have also been interested in visualization: how do we visualize this information? There is not much research on that: people do research on how well systems describe music, but not on how well we represent it. We have been collaborating with people who do user analysis, and we see that music is very challenging for visualization, because some visualizations may be more suitable for certain people, for certain user profiles, and some visualizations may be more useful during the concert and others afterwards; this is also something we have studied.

I just want to show you a video of a concert we did; we have done several concerts in the project. This one took place at an event called the Singularity Summit, an event for people involved with technology, with a lot of young people in the audience, so we wanted to make an educational concert showing different properties of the music to the people there, presenting all the descriptors we extract in a way people might understand. We took the piece, divided it into different segments, and provided visualizations of the score and the melodic lines, the key or tonality, the instruments playing, and the conducting gestures: all the aspects I presented before. Let me show you a video so you can get an idea of the results; don't worry about the recording.
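As a hedged illustration of the kind of low-level kinematic descriptors one can derive from skeleton data, here is a minimal sketch that computes the speed and acceleration magnitude of a tracked hand from a sequence of Kinect-style 3D joint positions via finite differences. The data layout and the synthetic trajectory are hypothetical; real skeleton streams and the project's actual gesture features are richer.

```python
# Minimal kinematic features from Kinect-style skeleton data:
# speed and acceleration magnitude of one joint via finite differences.
import numpy as np

def kinematics(positions, fps=30.0):
    """positions: (n_frames, 3) array of x, y, z for one joint (metres)."""
    dt = 1.0 / fps
    velocity = np.gradient(positions, dt, axis=0)       # m/s per axis
    acceleration = np.gradient(velocity, dt, axis=0)    # m/s^2 per axis
    speed = np.linalg.norm(velocity, axis=1)
    acc_mag = np.linalg.norm(acceleration, axis=1)
    return speed, acc_mag

# Hypothetical example: a hand tracing a beat-like up-and-down motion.
fps = 30.0
t = np.arange(0, 4, 1 / fps)
hand = np.stack([0.1 * np.sin(2 * np.pi * 0.5 * t),          # sideways sway
                 0.3 * np.abs(np.sin(2 * np.pi * 1.0 * t)),  # beat bounce
                 np.zeros_like(t)], axis=1)
speed, acc = kinematics(hand, fps)
print(f"mean speed {speed.mean():.3f} m/s, peak |acc| {acc.max():.2f} m/s^2")
```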
The quality of the video is just that of someone's handheld camera, but we had a very nice experience, and we also evaluated it with a group of users, asking about their impressions. We saw, for instance, that experts and naive users have different needs in terms of the kinds of visualization they want; that people want to have control over these visualizations (in a public space with a shared screen you cannot have control, but on your own computer you can select); and that some musicians say it may be relevant as a learning tool to visualize music, although of course people who love music prefer to listen rather than look at anything, so for them it might even be worse. We also found very interesting ideas about the need for a compromise between surprise and an overview of what is coming, and between attracting attention and over-stimulation. This was done in collaboration with the Delft University of Technology, one of the partners of the project.

Just to conclude: we have seen that state-of-the-art algorithms are useful for this, but of course there are errors; we still have to improve our research methods a lot, and we still have to understand many things about ourselves as humans if we want to understand music. We are also seeing the limitations of state-of-the-art algorithms, for instance with this kind of material, and also in a real concert: some things work offline, but doing them in real time is harder; for instance, the score alignment or the visualization of instrumentation needs some latency, and that can be very bad for visualization. We also had the opportunity in this project, which we didn't have before, to work with user-centered paradigms: we had worked a lot on the methods, the signal processing and classification parts, but not on the real user: how are you going to present this to the user, and how does perceived quality compare to accuracy? We have learned a lot from that. Now we are also working to extend these ideas to multimodal description: for instance, together with Gloria Haro, Olga is working on the description of music videos, integrating visual cues into our descriptors of melody, structure, and so on, and we would like to continue this line and extend it to other setups.

I want to finish by acknowledging all the people who have worked on this; these are photos we took (we need a new one, because there are more people now). I would also like to emphasize that we are opening our data and our technologies to the public domain, and this is very nice because you get a lot of feedback from the ways people use them, which may be very different from what you expected; this, for instance, is from the melody extraction software. We want to keep doing that, so if you are interested in any of this, please let me know; we are open to providing all the data, all the algorithms, and so on.

Finally, I want to invite you to an event we will hold at the end of March at the Arts Santa Mònica, on the Ramblas: a concert and an event called Classical Music in the 21st Century. I hope you will have time to come, try out the Becoming the Maestro game, see a piano with some real-time visualizations, and try the prototypes we will have there. You are all welcome to come. Thank you
very much. Any questions?

Q: Yes. Emilia, thank you for your talk, a very interesting one. An important part of the concert is the audience. In your project, do you take into consideration any features, any analysis of the audience: how they react to the music, when they applaud, whether the characteristics of the audience, whether they are all old or young, make any difference?

A: I haven't presented that because it's not really my topic, but in the project we have been analyzing the audience. Some people have used, for instance, physiological sensors in other projects, but in PHENICX we mainly worked with evaluation and behavioral experiments with users. The problem is that in classical music people applaud when they are supposed to applaud, so there is not much signal there; in other styles you can get a lot of information by analyzing the audio or the video, but in classical music it's more difficult. Another thing we have done is analyze Twitter, how people tweet; but of course in a classical concert you cannot use your phone, so that was a limitation of this particular setup compared to pop music. We can also ask people: for instance, at one concert we asked people to record and send us their videos, so we could analyze which moments they recorded, and we asked people to tweet so we could analyze that data afterwards. So these are the approaches we follow: analyzing audio during the concert, Twitter, or the videos and photos people take during the concert.

Q: Thank you for the nice overview. You said that you start by analyzing recordings, scores, and the information you have, and try to extract information from them. But couldn't you also extract information from the creation process itself? I guess most composers use some kind of method to create music; it's probably not a random process, so using that information should also help the analysis later on.

A: Yes. In this particular project the composers are not available, but we have used, for instance, information from the editing process: the editors who create the final recording, or the final edited video, use information from the different cameras, and the way they do it is useful for learning. Or if we can store multi-track recordings and see how the producer created the mix, we can learn a lot from that too. For the composition process there are some initiatives where people document the creative process, but there is still not much documentation. You are right that if we had a way to make it available, things would be much easier, and there are now some initiatives to open up the data created when making a mix, so you have all the raw material to work with; dealing with a stereo recording is very difficult, but with all that information you could do much more.

Q: Maybe another question, not on the formal analysis: sometimes you know when a particular piece was created and its contextual factors; the music was dedicated to someone, or came after a particular event in the life of the composer. Do you use that to enhance the experience, for example?

A: I didn't focus on it here, but there is one use case, digital program notes, where we crawl the web for the composer and the piece, looking for music-specific websites, and
then, to build the program notes, we use this information, and we personalize it: we have three levels of musical expertise, so for a music expert we can provide more musical information, and for a naive listener we also provide some historical context. So yes, we do integrate that. It is very difficult to personalize down to a single person; we initially tried individual profiling, but now we use three levels of expertise.

Q (Marcelo): Thank you for a wonderful talk. I have this question: personally, I think a good part of the experience of being at a concert is being able to hear the music coming from all around you, as opposed to listening at home, even to a good recording, where you hear just stereo. The recordings you listen to at home were recorded at a live concert, say, but there has been placement of microphones and, I assume, some mixing going on. So my question is: have you thought about, or has it already been done, using this kind of analysis of the performance to optimize both the placement of the microphones and the mixing, so that the experience at home is as close as possible to the experience at the concert?

A: There is some research on automatic mixing, on getting the best mix for each kind of setup. There are also practical reasons in orchestras for not changing anything they already have, so we did not experiment with that in the project: they had their own way of recording, and that's it, and it sounds very good. But yes, from the mix and the different channels you can estimate such things; it's the same with video, where you can link the fixed cameras to the final edit, and then you can work out what would be better for each platform or setup. And with the virtual reality idea, we are working on creating experiences where you can be more immersed in the concert than with stereo recordings.

Q: Okay, thank you for the nice talk. I just have a little suggestion: have you ever considered looking at ballet choreographers or ballet dancers to grasp, in some sense, how you can transform or convert the music, the acoustic music, into the visual domain? Maybe the gestures they use, the moments they emphasize, the way they dance: ballet choreographers visualize the music.

A: That's an interesting idea. We have not addressed that, but yes, it would be a nice project to do. We have done something on opera, on the movement of the singers when they perform, but not on dance.

Host: Thank you. Before we leave: the faculty lunch has moved; it's going to be in the Tànger building, ground floor, room 22, the seminar room or something like that. So be there. Thanks again. Okay, thank you.