In this era, when everybody's buzzing about big data science, I'd like to also ask you a more humanistic question: what makes language language? And what makes music music? Because language and music not only share common evolutionary origins and neurobiological processing; they're also pretty much the only things we humans do better than all other species.

Scientific and humanistic reasons led me to develop the world's first automated translation services in the early days of the web. Because languages are trade routes. Language lets us trade ideas, overcome barriers, and share common ground. But it remains one of the hardest challenges in big data science.

How do we learn complex relations between languages? Humans learn by seeing this round orange thing while hearing mama say, "qiú, tī qiú." We correlate the Chinese we hear against a second representation of the environment we see. Over many instances, we gradually learn that qiú means the round thing and tī means kick. Our robots today aren't good enough to do this. But when I first got to HKUST many years ago, I noticed that Hong Kong requires all government proceedings to be kept in both English and Chinese. And this led to the idea that we could approximate the picture by hearing mama speak Chinese while seeing an English representation of that environment.

So the key question becomes: what kind of models can learn the relationships between any two natural languages? Attacking that problem gets us closer to identifying the universal DNA of language. Because language is what lets us think complex thoughts, and those thoughts let us create new languages with which to think more complex thoughts, in the great cycle of intelligence. Language structures thought.

Now, the traditional way linguists and mathematicians have thought of modeling language is: tell me whether this sentence is grammatical. But that would not have done your average caveman very much good.
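The cross-situational learning described above, correlating the words we hear against the scenes we see over many instances, can be sketched as a simple co-occurrence model. This is a minimal illustration only, not the actual system: the observation data and the helper name `best_meaning` are hypothetical.

```python
from collections import Counter

# Hypothetical (words heard, objects/actions seen) observations.
# Over many such situations, the right pairings co-occur most often.
observations = [
    (["ti", "qiu"], {"kick", "ball"}),
    (["qiu"],       {"ball"}),
    (["ti", "men"], {"kick", "door"}),
    (["men"],       {"door"}),
    (["ti", "qiu"], {"kick", "ball"}),
]

cooc = Counter()        # (word, concept) co-occurrence counts
word_count = Counter()  # how often each word was heard
for words, scene in observations:
    for w in words:
        word_count[w] += 1
        for concept in scene:
            cooc[(w, concept)] += 1

def best_meaning(word):
    """Guess a word's referent: the concept that co-occurs with it
    most reliably across all observed situations."""
    scores = {c: n / word_count[word]
              for (w, c), n in cooc.items() if w == word}
    return max(scores, key=scores.get)

print(best_meaning("qiu"))  # ball
print(best_meaning("ti"))   # kick
```

With enough situations, the correct pairing is the only one whose co-occurrence ratio stays near 1.0, which is the intuition behind learning "qiú means the round thing, tī means kick."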
What's really needed to survive evolution is to translate "look out, a tiger's attacking from behind you" into another visual or abstract representation that lets you decide quickly to run. But learning traditional translation models has exponential complexity. In attacking that problem, I was inspired by a longstanding scientific mystery: why do languages across the world universally limit a verb's semantic roles to a maximum of four? I discovered that inversion transduction grammars give rise mathematically to this magic number four, in order to keep languages learnable. Because languages evolved to make translation easy to learn. Otherwise, your tribe became dinner.

Translatability is what makes language language. And learning translations is the main cognitive capability underlying intelligence. Even within this magic-number-four limit, all sorts of powerful translations can be efficiently learned.

The same principles apply to learning relationships for all sorts of languages: visual languages, gestural and abstract languages, and especially musical language. Because music and language share common origins. Music is the evolutionary foundation of language. And what makes music music is, again, the complex relationships between the many different languages of music, to which we've begun applying our translation learning models. Because it's not enough to master one of these. You have to learn to translate lyrics to melodies, melodies to chord progressions, to dynamics, to rhythm.

Here, we're applying our translation model to learning lyrics: specifically, challenge-and-response lyrics in hip hop. The machine starts with zero knowledge, not even pronunciation dictionaries, and learns to translate any given challenge into an improvised creative response. And we said, why stop with English? Let's do this for French. Notice all the complex relations that have to be learned. The response has to be somehow meaningfully related to the challenge.
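The magic-number-four claim about inversion transduction grammars mentioned earlier rests on a combinatorial fact: a binary ITG derives a sentence pair by recursively splitting a span in two and combining the halves either straight (same order) or inverted, so the reorderings it can generate are exactly the separable permutations. For four elements, that excludes precisely the two patterns 2413 and 3142. A minimal sketch that checks this (illustrative code, not the published formalism):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def itg_perms(n):
    """All permutations of 0..n-1 derivable by a binary ITG:
    split the span in two, derive each half recursively, then
    combine the halves straight (same order) or inverted."""
    if n == 1:
        return frozenset({(0,)})
    out = set()
    for k in range(1, n):
        for left in itg_perms(k):
            for right in itg_perms(n - k):
                shifted = tuple(x + k for x in right)
                out.add(left + shifted)   # straight combination [A B]
                out.add(shifted + left)   # inverted combination <A B>
    return frozenset(out)

perms4 = itg_perms(4)
print(len(perms4))             # 22 of the 24 permutations of 4 items
print((1, 3, 0, 2) in perms4)  # False: the "2413" pattern
print((2, 0, 3, 1) in perms4)  # False: the "3142" pattern
```

The counts follow the large Schröder numbers (1, 2, 6, 22, ...), so an ITG reaches all 6 orderings of three elements but only 22 of the 24 orderings of four, and the gap widens rapidly for longer spans. That constraint is what keeps the space of translations polynomially learnable rather than exponential.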
The response should rhyme, and its rhythm should match the challenge. Music, like other languages, forms trade routes that stimulate cross-cultural relations, just as the spice, tea, and silk routes did. Music is also one of the easiest ways to learn languages and to build relationships.

We are building technologies that learn and technologies that help people learn; technologies that build relationships and technologies that help people build relationships. The trade routes of language are among the most important infrastructure investments needed today. And we have a long way to go, because machines today still have a lot of trouble translating even what a two-year-old knows.

We invite you to join our conversation and our efforts to build the 21st century's new trade routes of language and music, on a radically large scale, across all our regions. Thank you.