Hello, everyone. I'm Raghura Sagnath, a final-year computer science student at Beverati Hyderabad College of Engineering for Women. Today I'm thrilled to take this opportunity, on this esteemed platform, to talk about creating music with artificial intelligence. If I were not an engineer, I would probably be a musician, as that is what my name signifies. I often think in music and live my daydreams in music; I can't remember a single day when I did not open my music player. So, before diving deeper, let's quickly listen to a short musical composition created using Magenta's AI. Tools like this make composing music an easy process for everyone, not only professional musicians.

Coming to the history, let's see when and how it all started. The earliest use of computers to compose music dates back to the mid-1950s. As research continued, initiatives such as Google Magenta, Sony Flow Machines, and IBM Watson Beat set out to find out whether AI can compose compelling music. These are the existing services.

Traditionally, composing music has involved a series of activities: defining the melody and rhythm, harmonization, and arrangement or orchestration. Now, however, we can generate music with deep learning models instead of a large orchestra. Obviously, this is not readily applicable to every form of music, but it is a reasonable starting point, especially for classical music. When a piece of music is performed, musicians add patterns of small deviations and nuances in pitch, timing, and other musical parameters. These patterns account for the musical concept of expressiveness, or gesture, and they are necessary for the music to sound natural. This trend toward AI systems building their own self-sufficient understanding of musical elements is the basis of the higher-level musical intelligence we see today.
In the 1980s, David Cope, a composer and professor of music, built a foundation for many current AI models with his Experiments in Musical Intelligence. First, music and its attributes are encoded into databases; then a collection of recombinant segments is extracted using certain identifiers and pattern-matching systems. From there, the musical segments are categorized and reconstructed in a logical musical order until a new musical output is produced.

Now we'll see how Magenta can be used for creating AI music. Why Magenta, and not any of the other existing services mentioned before? Magenta is an open-source library powered by TensorFlow, mainly for manipulating music and image data and using that data to train deep learning models. The Magenta workflow has three phases: the deep learning algorithm, autoencoders and MusicVAE, and the dataset and training. Initially, the deep learning model has to be built for the Magenta library. MusicVAE lets the encoders create music: a sequence of music notes is encoded into a latent vector and then decoded back into a sequence, and new material is produced by interpolating between such vectors. The latent vector is a compressed representation that captures the structure of high-dimensional input data. This allows for a more realistic and smooth interpolation between music sequences, so that a smooth and realistic output can eventually be generated.

Interesting, right? I had a lot of fun learning about such emerging trends. Music is an amazing topic to work on, and combining artificial intelligence with it just makes it superb. So thank you all for being a wonderful audience, and I hope to see you again.
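To make the interpolation idea concrete, here is a minimal conceptual sketch of encoding two melodies into latent vectors, walking a straight line between them, and decoding each step back into a sequence. The `encode` and `decode` functions below are hypothetical toy stand-ins (a fixed random projection and its pseudo-inverse), not Magenta's actual MusicVAE API; they only illustrate the latent-space blending step.

```python
import numpy as np

# Toy stand-ins for a VAE encoder/decoder (hypothetical, for illustration only).
# A real MusicVAE encoder is a learned neural network; here a fixed random
# projection keeps the example self-contained and runnable.
rng = np.random.default_rng(0)
SEQ_LEN = 8     # notes per melody
LATENT_DIM = 4  # size of the compressed latent vector
W = rng.standard_normal((SEQ_LEN, LATENT_DIM))

def encode(pitches):
    """Map a sequence of MIDI pitches to a latent vector."""
    return np.asarray(pitches, dtype=float) @ W

def decode(z):
    """Map a latent vector back to a pitch sequence (least-squares inverse)."""
    return z @ np.linalg.pinv(W)

def interpolate(seq_a, seq_b, num_steps):
    """Blend two melodies by walking a straight line in latent space."""
    za, zb = encode(seq_a), encode(seq_b)
    path = []
    for t in np.linspace(0.0, 1.0, num_steps):
        z = (1.0 - t) * za + t * zb  # linear interpolation between latents
        path.append(decode(z))
    return path

melody_a = [60, 62, 64, 65, 67, 69, 71, 72]  # ascending C major scale
melody_b = [72, 71, 69, 67, 65, 64, 62, 60]  # the same scale, descending
path = interpolate(melody_a, melody_b, num_steps=5)
# The intermediate steps are smooth blends of the two melodies; the
# reconstruction is only approximate because the projection is lossy.
```

In the real MusicVAE the decoder is a recurrent network trained so that every point in the latent space decodes to a plausible musical sequence, which is why the intermediate steps sound like coherent music rather than a crude average of notes.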