Our next speaker is Gus Xia, who has given us the title Music, Mind and Machine: Expressive Collaborative Music Performance via Machine Learning. In a music ensemble, musicians collaborate with each other to achieve a common musical goal. The art for every musician is not only to interpret the music on their own, but also to stay in concert with the others by continuously adjusting every aspect of musical expression, such as timing, rubato, and dynamics; this is what we call an expressive collaborative music performance. So, one question naturally arises: how do musicians achieve this both important and difficult interaction? The answer is actually quite simple: they practice together. They spend a lot of time practicing together, and in music we call this rehearsal. It is during rehearsal that musicians become familiar with each other's interpretations while adjusting their own expressive responses. This procedure of learning expressive collaboration through rehearsal suggests that we can use machine learning algorithms to train an artificial performer in a very similar way. Now, imagine you've got a music band and you need an extra player. Wouldn't it be great if you could just download one from the web? This is exactly the goal of my research. To be more specific, I seek to answer the following research question: how can we build an artificial performer that, with rehearsal experience, automatically improves its ability to collaborate with human musicians expressively? To solve this problem, I conduct my research in three steps. First, I collect data from real rehearsals, which captures the expressive collaboration between humans. The second step is the most important, in which I design different function approximations to reveal the relationship between the interpretations of different performers. In particular, they are modeled as co-evolving time series.
The key difficulty is that, at each time point through a piece of music, the model considers not only the correspondence between the different performers, but also the dependence of the current performance on the performance history. The final step is to train the model to obtain its parameters, and based on those, an artificial performer is able to synthesize a performance by interacting with human musicians. Our experimental results show that, using four to eight rehearsals, we can create a fairly expressive and collaborative artificial performer. And I believe that in the near future, we will see the music world use this technology to serve professional music bands. Thanks.
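The co-evolving time-series idea described in the talk can be sketched in code. The following is a minimal, hypothetical illustration, not the speaker's actual model: it assumes each performer's expression at a time step is a small feature vector (say, tempo deviation and dynamics), and it regresses the artificial performer's features on the human's current features plus both performers' recent histories, fitted by least squares over rehearsal data. All function names, the linear form, and the feature choices are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch: the artificial performer's expressive features at
# time t are predicted from (a) the human's features at t and (b) both
# performers' recent performance histories -- the two kinds of
# correspondence mentioned in the talk. The linear model is an assumption.

def build_features(human, machine, t, history=4):
    """Stack the human's current features with both performers' histories."""
    h_hist = human[t - history:t].ravel()    # human's recent history
    m_hist = machine[t - history:t].ravel()  # machine's recent history
    return np.concatenate([human[t], h_hist, m_hist, [1.0]])  # bias term

def fit_rehearsals(rehearsals, history=4):
    """Least-squares fit over (human, machine) feature pairs from rehearsals."""
    X, Y = [], []
    for human, machine in rehearsals:
        for t in range(history, len(human)):
            X.append(build_features(human, machine, t, history))
            Y.append(machine[t])
    W, *_ = np.linalg.lstsq(np.array(X), np.array(Y), rcond=None)
    return W

def predict(W, human, machine, t, history=4):
    """Synthesize the artificial performer's expressive features at time t."""
    return build_features(human, machine, t, history) @ W

# Toy data: 2 expressive features (e.g. tempo deviation, dynamics) per step,
# six rehearsals, matching the "four to eight rehearsals" mentioned above.
rng = np.random.default_rng(0)
rehearsals = [(rng.normal(size=(60, 2)), rng.normal(size=(60, 2)))
              for _ in range(6)]
W = fit_rehearsals(rehearsals)
human, machine = rehearsals[0]
print(predict(W, human, machine, 10).shape)  # (2,)
```

At performance time, `predict` would be called step by step, feeding back the artificial performer's own synthesized output as its history, which is what makes the two time series co-evolve.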