Our second presenter is Pallavi Baljikar; her talk is titled "Speech Synthesis from Found Speech."

More than 6,500 languages are spoken around the world, yet fewer than 40 of them are supported for speech synthesis by companies such as Google and Microsoft. Speech synthesis essentially involves learning a statistical model of the human vocal production mechanism so that it can take in text and vocalize it as speech. The parameters of this model are trained on pairs of text prompts and their corresponding speech. The prompts are designed to be phonetically balanced, and the speech is recorded from a native speaker of the language in a recording studio. To build such voice technology for the many unsupported languages, you need both these language resources and a native speaker; without access to either, it becomes very difficult. On the other hand, there is rich multilingual, multi-speaker data available on the web in the form of audiobooks, YouTube videos, and podcasts, which I call found data.

So the goal of my thesis is to take this rich multi-speaker, multilingual found data and build an understandable speech synthesis system from it. There are, however, many challenges in using this kind of data. First, it is extremely noisy and highly variable, so the first part of my thesis looks at machine learning techniques for selecting a subset of this data that is optimized for speech synthesis. Second, much of this data is incomplete or insufficient: many audiobooks and podcasts are available only as audio, so the transcripts are missing. The second part of my thesis therefore looks at how I can use external resources, perhaps from the same language or from a higher-resource language such as English, to augment and adapt the found data so that I can build a speech synthesis model for my target language.
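The data-selection idea mentioned above can be illustrated with a minimal toy sketch. This is not the method from the thesis; it assumes each found utterance comes with an estimated signal-to-noise ratio (SNR) and a phone sequence (both invented here for illustration), and greedily prefers clean utterances that extend phonetic coverage:

```python
# Toy sketch of subset selection for TTS from noisy found data.
# Assumptions (not from the talk): each utterance has a hypothetical
# precomputed SNR estimate and phone sequence.

def select_subset(utterances, max_n):
    """Greedily pick up to max_n utterances, favoring high SNR and
    utterances that contribute phones not yet covered."""
    by_snr = sorted(utterances, key=lambda u: u["snr"], reverse=True)
    covered, chosen = set(), []
    # Pass 1: extend phonetic coverage in descending-SNR order.
    for u in by_snr:
        if len(chosen) >= max_n:
            break
        new = set(u["phones"]) - covered
        if new:
            chosen.append(u)
            covered |= new
    # Pass 2: fill any remaining slots with the cleanest leftovers.
    for u in by_snr:
        if len(chosen) >= max_n:
            break
        if u not in chosen:
            chosen.append(u)
    return [u["id"] for u in chosen]

# Invented example data: utterance "c" is noisy but is the only
# source of the phones /s/ and /o/, so coverage pulls it in.
data = [
    {"id": "a", "snr": 25.0, "phones": ["k", "a", "t"]},
    {"id": "b", "snr": 30.0, "phones": ["k", "a"]},
    {"id": "c", "snr": 10.0, "phones": ["s", "o"]},
    {"id": "d", "snr": 28.0, "phones": ["k", "a", "t"]},
]

print(select_subset(data, 3))  # → ['b', 'd', 'c']
```

Real selection criteria for found speech would use richer signals (alignment confidence, duration outliers, speaker consistency), but the greedy coverage-plus-quality trade-off is the core shape of the problem.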
Finally, the last part of my thesis looks at how I can build a unified model across a common subset of languages so that they can share model parameters for common acoustic properties. I hope that the techniques I develop in my thesis can be used by this little blind Tulu-speaking girl to listen to her textbooks and class material, instead of having to overcome the extra barrier of learning a second language such as English or Hindi. I hope it helps this refugee woman, stuck in a foreign land, to communicate her symptoms to her doctor, instead of being misdiagnosed for lack of a translation service. And I hope it helps this little girl with motor neuron disease to design a voice based on her own vocal identity, her age and gender, and to keep updating it throughout her life, instead of being stuck with a Stephen Hawking-like voice all her life. Thank you.