Yes, we're really happy to be here today. We're here for two reasons. One is that we are massive math geeks, which we have in common with a lot of people here. The other is that I always wanted to be in a really, really cool band. Cool bands need cool lyrics, and we're no literary geniuses of the 21st century; on the other hand, we found out that contemporary pop culture has pretty simple lyrics, so we thought: hey, maybe we can generate lyrics ourselves.

Hence "from the curb to the mountain heights": we took the German rapper Bushido, who wrote "from the curb to the skyline", and Goethe, who wrote "Cover your sky, Zeus, with cloud vapour, and practise, like the boy who beheads thistles, on oaks and mountain heights." Over the course of the project we also gained a nice person called Louis, who helped us by giving us his server, and because of that we can present the following feature: a website where you can try what we are presenting now. It looks like this: röhrich.info/gedichte. You can go there and enter a couple of words, syllables, or letters; the algorithm then generates a poem, or a couple of lines of one, and you can tweet it under the official Camp hashtag. One warning: we're no specialists in running servers, so it's possible the server crashes. We're sorry about that in advance; we just want to show how it could work.

The big question is: how could it work? The answer is pretty simple: with supervised machine learning. I'm pretty sure a couple of people here know what that is; for the others, I'll explain it in three sentences. For machine learning we first need data, lots of data, and it comes in pairs: observations and goals. What are observations? For example, today's weather data, where the goal would be to predict tomorrow's weather; or, at Facebook, the observations are pictures and the goal is to find out who is in them, say Lisa, Tom, and Klaus. We then throw our observations and goals into complex models. That's the core of supervised machine learning: choosing the right model and training it the right way. Afterwards we can do two things. We can put in new observations, and the model tells us that in a new picture there's also Lisa, not only Tom and Klaus. Or we can generate new pairs of observations and goals; physicists do this, for example, to find out in which regions particles collide. (A minimal sketch of this setup follows below.)

This has not too much to do with poems and texts yet, but we know one thing for now: we need data. So we went data mining and got some: one and a half megabytes of poems and lyrics from Goethe, Schiller, Bushido, Sido, and K.I.Z., the German hip-hop crew, plus a 500-megabyte Wikipedia dump. Remember the Wikipedia dump; Wikipedia is not about poems, but we'll get to why we need it. After the data come the complex models, and here's the problem: sentences are not mathematical objects. How do we get sentences into these models?
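To make the observations-and-goals idea concrete, here is a minimal sketch of the supervised-learning setup described in the talk, using the weather example. The features, the toy numbers, and the use of scikit-learn are our own illustration, not the speakers' actual setup.

```python
# A minimal sketch of supervised learning: pairs of (observation, goal).
# Toy data and the choice of model are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Observations: [temperature in degrees C, humidity in %] measured today.
X = [[22, 40], [18, 85], [25, 30], [15, 90], [20, 70]]
# Goals: did it rain the next day? (1 = rain, 0 = no rain)
y = [0, 1, 0, 1, 1]

model = LogisticRegression().fit(X, y)  # learn from (observation, goal) pairs
print(model.predict([[19, 80]]))        # new observation -> predicted goal
```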
We take a text, "Bedecke deinen Himmel" ("Cover your sky"), and cut it up into units. Those could be words or letters; we took syllables here: "Be", "de", "cke", and so on. The computer needs units it can deal with, and what mathematicians like is numbers, so we index them: every distinct syllable gets a number, "Be" gets zero and so on, and the space gets index three. That's almost perfect, since computers can work with numbers, but for reasons we'll explain later we change the representation a little. We count the syllables, in our example there are nine, and for each unit we take a vector of nine zeros with a one at that unit's index. Stacked together, those vectors give us a matrix; wherever there's a space, the one sits at the fourth element, index three. Why it makes sense to do it this way, we'll explain later. (A sketch of this encoding follows below.)

So now we can get to the complex model. Maybe not quite yet. What did we do so far? We took a lot of text and got the computer to read it; that's pretty much the observation. Now, what is the goal? We have to ask ourselves what the model is supposed to do, and what most language models do is predict the next unit based on the previous ones. When I start with "Be", the next syllable should be "de"; in English, "co" and then "ver". With "Himmel": given "Bedecke deinen Him", the network should produce "mel". So we shift the text by one unit, and the neural network has to predict what comes next; you can see it in the animation there, and it's the same in English. We put this into the neural network, and out come probabilities.

So what does the model look like? We take the syllable "mel". It doesn't go into the model as "mel" but as its vector representation, 0, 0, 0, ..., 1. We multiply a matrix with it, those are the weights, and we get a vector out. (For those who aren't that comfortable with maths: just wait it out, at the end you get poems.) The vector that represents "mel", multiplied with the matrix, gives us what we call an inner representation, or hidden representation, of our syllable. Then we do the same with more weights, a matrix we call W_out, and we get a vector whose values add up to one. The smart ones will have noticed already; for the others I'll explain: that's a probability distribution. Based on the syllable "mel", we can now say with what probability a space comes next. Ideally we do get the space, because that's exactly what we want: to know what's coming after "mel".
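Here is a minimal sketch of the encoding step just described: cutting text into units, indexing them, and building one-hot vectors. The hand-made syllable segmentation of "Bedecke deinen Himmel" is our own illustration; note that the talk counts nine syllable slots including the repeated space, while deduplicating the units gives eight vocabulary entries.

```python
import numpy as np

# Cut the text into units (syllables here; letters would work the same way).
syllables = ["Be", "de", "cke", " ", "dei", "nen", " ", "Him", "mel"]

# Index the units: every distinct syllable gets a number; the space gets one too.
vocab = {}
for s in syllables:
    if s not in vocab:
        vocab[s] = len(vocab)

# One one-hot row per unit: all zeros except a 1 at the unit's index.
V = len(vocab)
one_hot = np.zeros((len(syllables), V))
for i, s in enumerate(syllables):
    one_hot[i, vocab[s]] = 1.0

print(vocab)    # {'Be': 0, 'de': 1, 'cke': 2, ' ': 3, 'dei': 4, 'nen': 5, 'Him': 6, 'mel': 7}
print(one_hot)  # the matrix the model actually sees
```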
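And a sketch of the single prediction step just described: the one-hot vector times a weight matrix gives the inner (hidden) representation, and a second matrix W_out followed by a softmax gives a probability distribution over the next unit. The matrix sizes, the random weights, and the softmax normalization are our own placeholder choices; a trained model would have learned weights instead.

```python
import numpy as np

rng = np.random.default_rng(0)
V, H = 8, 16                       # vocabulary size, hidden size (sizes invented here)

# Training pairs come from shifting the text by one unit:
#   inputs  = syllables[:-1]       # "Be", "de", "cke", ...
#   targets = syllables[1:]        # "de", "cke", " ",  ...

W_in = rng.normal(size=(V, H))     # weights: one-hot -> inner ("hidden") representation
W_out = rng.normal(size=(H, V))    # weights: hidden -> one score per vocabulary entry

x = np.zeros(V)
x[7] = 1.0                         # one-hot vector for "mel" (index 7 in the sketch above)
h = x @ W_in                       # inner representation of the syllable
scores = h @ W_out
probs = np.exp(scores) / np.exp(scores).sum()   # softmax: the values add up to one

print(probs.sum())                 # 1.0 -- a probability distribution over the next unit
```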
Then we do the same with "Him", the syllable before: we multiply it with the same matrix and get its inner representation. To combine it with the current one, we have to introduce another matrix; we multiply with that and add the result to the inner representation. We can continue this successively, as far back as we want, and that's the whole secret of the model: we take the earlier syllables, fold them in one by one, and hope that the information from the past comes through, so that at some point we get a sensible next syllable. (A sketch of this recurrent step follows below.) Now some people will ask: what does that have to do with poems, where do the poems come in? And that's the black magic of the whole thing: you train the network on the poems by adjusting the weights. I don't want to say too much about this, because it's not that easy; it took twenty years to get it right.

I've talked a lot about the model, so here is a poem from the first one: "Victims are on the street with our friends, it is a street with the heavy street, the street heavy, it is as heavy as the tail is the tail, because the street is heavy like the tail, I am the tail and the heavy, and look, I am again in the pigs, and the street is heavy, that street is heavy." And so on; it does not quite work yet. What did we give the network? Just single letters, not syllables. So what it learned is to build German words, and they are all German words; it's a lot of the same words over and over, but it learned some, and there's even some grammar in there. It still sounds a bit strange.

We wanted more. A German radio station had analysed the vocabulary of German rappers, and we saw that Sido and Bushido have a very small vocabulary, not as small as Helene Fischer, who sings German Schlager and is not well liked here, but still very small. So we decided to take only a few of their texts and added Kollegah and K.I.Z., because they use a lot of different words. (A sketch of this vocabulary measure also follows below.) And this is where the Wikipedia data comes in: we first train the complex model on the Wikipedia data, because from that it learns a lot of basic things about German, which words exist, where to put commas, and then we don't need as much poem data on top. So we trained our network with this improved data set, and it created the following, which you can also see on our server: "From the corner it comes to the heights of the mountains, it's nice, the tail of the dice in the human and the pigs at the ward follows through the day selling, the hearts in the beds of the night, the stars heavy the body, from the mouth in the street, and the white rapper, women saw the code and the string and the tail of the women, I gave the king", and so on. It still makes no sense, just a lot of different words put together.

As a last point, I want to say thank you to everybody who helped. A big thank you to Louis, who helped a lot, great applause for him, and to Fabian as well; they helped make it all look better than we had planned. And thanks to the whole CCC team that made this possible.
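A minimal sketch of the recurrent step described above, together with the sampling loop that generates text, under our own assumptions: random untrained weights, a tanh nonlinearity (a common choice; the talk doesn't name one), and the toy vocabulary from the encoding sketch earlier. Training would adjust W_in, W_rec, and W_out; with random weights the "poem" is pure noise.

```python
import numpy as np

rng = np.random.default_rng(0)
syllables = ["Be", "de", "cke", " ", "dei", "nen", "Him", "mel"]  # toy vocabulary
V, H = len(syllables), 16

W_in = rng.normal(size=(V, H), scale=0.5)    # current syllable -> hidden
W_rec = rng.normal(size=(H, H), scale=0.5)   # the extra matrix that carries the past along
W_out = rng.normal(size=(H, V), scale=0.5)   # hidden -> scores over the vocabulary

def one_hot(i):
    x = np.zeros(V)
    x[i] = 1.0
    return x

def step(idx, h):
    """Fold one syllable into the running hidden state and predict the next one."""
    h = np.tanh(one_hot(idx) @ W_in + h @ W_rec)     # combine current input and past
    scores = h @ W_out
    probs = np.exp(scores) / np.exp(scores).sum()    # softmax distribution
    return h, probs

# Generation: start somewhere, sample the next unit, feed it back in, repeat.
h, idx = np.zeros(H), 0                              # seed with "Be"
out = [syllables[idx]]
for _ in range(15):
    h, probs = step(idx, h)
    idx = rng.choice(V, p=probs)                     # sample the next syllable
    out.append(syllables[idx])
print("".join(out))                                  # gibberish until trained
```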
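And, for the vocabulary comparison mentioned above (and the "word count" question in the Q&A below), a minimal sketch of that measure: the number of distinct words in a lyric. The tokenization rule is our own guess.

```python
import re

def vocabulary_size(lyrics: str) -> int:
    """Number of distinct words, the 'word count' measure discussed in the talk."""
    words = re.findall(r"\w+", lyrics.lower())  # crude word tokenization
    return len(set(words))

# Bushido's line from the beginning of the talk: 5 distinct words.
print(vocabulary_size("Vom Bordstein bis zur Skyline, vom Bordstein"))
```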
Thank you very much, and thank you for daring to come onto the stage and present this to the others. Now I want to take the opportunity to open the floor; there's a lot of time for questions. Please come to the microphones in the aisles and to the front. Are there any questions? Yes, someone's coming with a question; please hold the microphone close to your mouth.

Hello. Oh, big room, awesome. I'm a bit late because the talk was renamed on short notice. That sounded a lot like the machine worked purely from the data, the song texts you gave it; is that right? Yes. And is there a possibility to use this with other texts of your own? It is possible to do that with any sequential data. You could put the Linux kernel into it, and it might output functional-looking code; you could put in a math book encoded in LaTeX, and it will output LaTeX.

On the other side: I'm an author, and I would like to know what word count I have; can you do that? So, I just learned the word "word count": that is the number of different words these artists have used in their songs. Yes, thank you very much, that was really interesting.

Hello, thank you very much. I played around with the website during the talk, and I noticed that at the end there's often just one letter. Is that a bug or a glitch? I'm sorry, that shouldn't happen. How many words did you enter? Just one to three words. Try it again. I also wanted to ask what it means. It apparently means nothing; it's just a bug.

Maybe I didn't really get it, but why was the Wikipedia dump included? Many people like to add one to get really working models: you want to push the model in the right direction, so it learns, for example, the German words from the Wikipedia dump, and then you just add the song texts on top of that. And because of that, the song texts have a higher weight than the Wikipedia dump? Yes, because we added them afterwards, we lose some of the Wikipedia information, so we get more poem-like structure rather than Wikipedia-like structure.

Another question from the left: did you do the same with the English Wikipedia? A lot of people have tried that with the English Wikipedia; I can recommend the papers by Alex Graves or Geoffrey Hinton if you want to look it up. Are ready-made implementations available as well? Yes, several. Thank you very much.

And left again: hello, and thank you for your presentation. I would like to know which extensions you see, especially towards rhymes or verses. Interestingly, most of the models that work at the moment use letters. That has the advantage that the vocabulary, so the length of this vector, stays reasonably small: there are just about 36 letters, whereas with syllables the vectors quickly become 7000 long. Usually people just use letters; we thought, hey, we could try it with syllables so that maybe the model picks up the rhythm, but unfortunately that did not work that well with our example. On the other hand, it's a really active scientific field, so a lot of things happen quite fast. You could also assume that giving the algorithm a lot more poems would help; we had less than a fourth poems against normal text, so that might improve it. Can we send you poems and you feed them in, or do we have to do it ourselves?
It's not that hard to do it on your own, but if you have more questions, we'll be available later. Please go to the website, try it, and tweet the funny poems it creates. And we would like to say sorry in advance should there be inappropriate content; we trained on rap lyrics that are not politically correct, so the output may just happen to be really bad. So thank you very much to the two speakers, who gave us so many great new ideas, and thank you as well for listening to the translation; this was a translation from German.