Science, politics, the economy: the cornerstone of all human culture is language. Babies learn language in just a few years, yet for all the advances of AI, no computer has managed this task. So how do humans do it? Most researchers try to find out by studying how kids learn English, but English is only one language among more than 7,000; that averages out to about 35 languages per country. Languages differ enormously in their sounds, their words, their combinatorics, everything. So the big question is: how do babies manage to learn any human language? We can't study all 7,000 languages, so what to do?

Years ago it suddenly struck me: if we find learning mechanisms that apply to the most diverse languages, we can assume that these are universal mechanisms. So I developed a method for selecting maximally diverse languages. To select these languages, we chose variables and searched large language databases for the languages that differed most on these variables. The vision was to study 50 languages, but funding agencies weren't ready for such a task, and we didn't have the technology. Still, I knew this method would work and would ultimately lead us to the underlying learning mechanisms of language. So I needed a proof of concept. I decided to scale down to 10 languages and build up a global network of researchers who already had data on these languages. This resulted in the first concerted effort at a systematic comparison of languages as diverse as Indonesian, Greek, and others.

What we want to know is how children learn language implicitly, in real life. So we filmed them with their families and friends over many years, and then we transcribed the data, annotated it, and mined it for patterns in the speech of the children, their caretakers, and the other people around them. One of our most striking findings is that the speech children hear is full of subtle statistical cues that allow them to learn language.
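The selection idea described above can be sketched in code. This is only an illustration, not the project's actual procedure: the feature vectors are invented placeholders standing in for typological variables drawn from real language databases, and the greedy strategy (repeatedly adding the language least similar to those already chosen) is one simple way to maximize diversity.

```python
# Illustrative sketch of maximum-diversity language sampling.
# The feature vectors below are invented, not real typological data.

def hamming(a, b):
    """Number of variables on which two languages differ."""
    return sum(x != y for x, y in zip(a, b))

def pick_diverse(features, k):
    """Greedily pick k languages that are maximally different overall."""
    langs = list(features)
    # Start from the pair of languages that differ the most.
    first = max(
        ((a, b) for a in langs for b in langs if a < b),
        key=lambda p: hamming(features[p[0]], features[p[1]]),
    )
    chosen = list(first)
    while len(chosen) < k:
        # Add the language whose minimum distance to the chosen set is largest.
        nxt = max(
            (l for l in langs if l not in chosen),
            key=lambda l: min(hamming(features[l], features[c]) for c in chosen),
        )
        chosen.append(nxt)
    return chosen

# Toy feature vectors: (word order, case marking, tone, prefixing)
features = {
    "A": (0, 1, 0, 1),
    "B": (1, 0, 1, 0),
    "C": (0, 1, 1, 1),
    "D": (1, 1, 0, 0),
}
print(pick_diverse(features, 3))  # → ['A', 'B', 'D']
```

With these toy vectors, A and B differ on all four variables and are chosen first; D is then farther from both than C is, so the sample is A, B, D.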
We don't notice these cues when we talk, but children's brains are able to pick them up. Words and their parts recur in the same environments over and over: words for toys, for example, appear alongside the word "play" in a variety of contexts. Kids pick up these patterns, and this helps them build up categories and their combinations. Most people think kids just imitate, but in fact they use a form of statistics, a computational power that is there from the beginning. So even without hearing all the words or word forms of a specific category, they can still learn them by cutting them up and recombining them.

By now we've delivered a convincing proof of concept and many exciting results, but we've only scratched the surface. To truly understand the underlying cognitive mechanisms, we need to study these learning strategies at large scale: with big data, many more languages, and many more kids. And now the time is finally ripe to embrace this stunning diversity at large scale. We have enough field workers ready to go to the most remote places, we have good tools to speed up data processing, and we have developed innovative quantitative methods to analyze the data.

With this big data we will be able to go way beyond what experiments can test, because we can find out what children actually do in real life, under real conditions, and we can then bring these results back to experiments to home in on more specific learning mechanisms. Even more importantly, we will be able to tackle individual variation despite shared learning mechanisms. Every child learns language in a slightly different way, and the critical question for parents, educators, and therapists is: at what point is that still normal, and when is it pathological? If we understand how kids learn language, we can build more efficient artificial systems, we can help people with linguistic pathologies, and we can develop teaching strategies that are more relevant in our global world.
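One concrete example of such a statistical cue, well known from infant statistical-learning research (though not necessarily the specific cue this project studies), is the transitional probability between syllables: within a word the next syllable is highly predictable, while across a word boundary it is not. A minimal sketch, with an invented toy "speech stream":

```python
# Sketch of one statistical cue infants are thought to exploit:
# transitional probabilities between adjacent syllables.
# The toy stream below is invented for illustration.
from collections import Counter

def transitional_probs(syllables):
    """P(next syllable | current syllable) for each adjacent pair."""
    pairs = Counter(zip(syllables, syllables[1:]))
    firsts = Counter(syllables[:-1])
    return {(a, b): n / firsts[a] for (a, b), n in pairs.items()}

def segment(syllables, threshold=0.75):
    """Insert a word boundary wherever the transitional probability dips."""
    tp = transitional_probs(syllables)
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tp[(a, b)] < threshold:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# A stream built from three repeated "words" (pre-tty, ba-by, do-ggy)
# in varying order, so cross-word transitions are less predictable.
stream = ["pre", "tty", "ba", "by", "pre", "tty", "do", "ggy",
          "ba", "by", "do", "ggy", "pre", "tty", "ba", "by"]
print(segment(stream))
# → ['pretty', 'baby', 'pretty', 'doggy', 'baby', 'doggy', 'pretty', 'baby']
```

In this toy stream, within-word transitions like "pre"→"tty" always occur together (probability 1.0), while cross-word transitions are split among several continuations (probability 0.67 or lower), so thresholding recovers the word boundaries.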
So I think this research matters for society. Thank you.