Hey everybody, whoa, this is loud. This morning from Matz's talk, I learned that all of us are language designers, so I'm a language designer too. So I had this idea to establish a new word in the Ruby community. And that word is "to Ruby" as a verb. So maybe you can all help me and say that with me: I love to Ruby. I love to Ruby. Thank you, awesome. So I love to Ruby too. I Ruby a lot. I Ruby forms and APIs, and yeah, sometimes I get really bored with these things, and then I choose an interesting topic, research it, and submit a talk to some Ruby conference. That's why today I'm talking about artificial intelligence, or neural networks to be precise. By the way, I'm Rin, and I'm talking about neural networks. So before I start, why am I talking about neural networks? I mean, this whole concept is really old. Actually, it's older than my grandma. This thing is an artificial neural network implemented in hardware, and it was built in 1957 by a dude called Frank Rosenblatt. I think Aaron ranted about the Rails middleware stack earlier, but try debugging this. This was built in an era when people were really optimistic about artificial intelligence. There was a New York Times article about this machine that really enthusiastically claimed it would be able to learn like a three-year-old child. That didn't happen. From the 70s through to the 90s, there was the so-called AI winter, when all this optimism disappeared, research programs were frozen, and so on. But nowadays artificial intelligence and neural networks are actually used in a number of applications, recommendation systems, for example. It's just that this "oh my God, wow, artificial intelligence" thing never really happened, I think.
And I think it was just in June when Google showed us these Deep Dream images, where instead of recognizing things in images, they had neural networks dream things up into images. So if you're seeing a really trippy image up there, it's nothing that was in your coffee. What I'm really trying to say is: neural networks are still a thing. And the second reason is that I have studied CS, and I guess some of you have too, but in the Ruby community there are a lot of people who haven't studied computer science, or maybe they have and just never took that AI class. So this is also for them. Okay, first step: understanding what a neural network is. Let's look at the brain. The brain is a network of billions of special cells called neurons. They are connected to each other and communicate through electrochemical signals. When a neuron receives a signal, it fires and sends the signal on to other neurons, which in turn can trigger more neurons, and so on. That's the very basic thing. Now, two more things about neurons. Earlier I said that when they receive a signal they fire, but that was a lie: when they receive a strong enough signal, they fire. So you might say some of these neurons are more lazy and some are more nervous. I call these ones the chill neurons, and the other ones the nervous neurons. And how chill or nervous they are is defined by something we call a threshold. That dude is totally chill and doesn't care if you poke him with a little signal. But the other guy, if you just nudge him a little, he will fire. So this guy has a large threshold, and the nervous guy has a small threshold. Then there's another concept, I swear, this is the last one before we get to networks. Neurons are connected, right? These connections have a direction, and they also have something called weight. You'll notice that I have drawn these two thicker than the other one coming from above.
So when neuron A receives a signal from B and C, that has way more influence on the output of A than when it receives a signal from D. That's why these two have more weight. Okay, enough with biology and neurology. Let's put on our computer science glasses and look at that thing from a computer science perspective. This neuron thing is an object that gets an input and produces an output. And that's a principle we know really well, right? It's pretty much how functions work. Or cats. I don't know much about cats because I don't have any, but I always learn about cats at conferences. Let's get back to the function example for a second, because this is actually how an artificial neuron works. As I said, there's input and output, and there are connections, and these connections have weights. That's the orange things. And inside that neuron, you can imagine there's a function that determines the output. There's a really easy example: a type of neuron called a perceptron. In the beginning I showed you this picture of this huge computer thing. That machine was also called the Perceptron, because it was made up of perceptrons. Anyway, the characteristic thing about a perceptron is that its input and output are binary. The function that sits inside it is also really easy, because it's just the weighted sum of the inputs evaluated against a certain threshold. And the threshold, you still remember those two guys, right? Because I don't really like math, I put up this function as a Ruby method. As I said, it's the weighted sum of the inputs. In this case, we just have two inputs, so it's weight one times input one plus weight two times input two. Then we look at the threshold, and if that weighted sum is bigger than the threshold, it returns one, and else zero. Okay. So that's the part about firing or not firing. So finally, a network of neurons, because you're probably still wondering what this is useful for.
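The Ruby method described above might look something like this; the method name and the two-input shape are my own sketch of what was presumably on the slide:

```ruby
# A perceptron: binary inputs, binary output, and a weighted sum
# of the inputs compared against a threshold to decide firing.
def perceptron(inputs, weights, threshold)
  weighted_sum = weights[0] * inputs[0] + weights[1] * inputs[1]
  weighted_sum > threshold ? 1 : 0
end

# The "chill" neuron (large threshold) ignores a small signal;
# the "nervous" neuron (small threshold) fires right away.
perceptron([1, 0], [1, 1], 1.5) # => 0
perceptron([1, 0], [1, 1], 0.5) # => 1
```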
As an example, suppose you have a network like this one, and you want it to represent a function, for example XOR, the exclusive or. So if you input one and one, it should output zero. And if you input one and zero, it outputs one, and so on. So I'll just add some weights to these connections, and I'll also add some thresholds, because I want to demonstrate how this thing works. As an example, I'm inputting one and one. And now comes the part where I have to do calculations live on stage. I'm starting at the top: one times one is one, and that's bigger than the threshold of 0.5, so that neuron on top will output one. Then we have one times one and one times one added up, that's two, and that's bigger than the threshold of 1.5, so that will also output a one. And the thing down there is exactly the same as up there, so we also have a one there. Okay. Now I have to add three things, but I think I can do it. One times one plus one times one is two, and then plus minus two times one, that's a zero. And zero is smaller than 0.5, so that's zero. Done. So the output of this whole thing is zero. Okay, little secret: this network actually implements XOR. And if you're really bored, you can try it out with all the other values and see that it really works. But you can also just trust me and we can go on. At this point, you might ask yourself: can we build every function with a neural network? I mean, we just built XOR. We can do OR. And NOT is really easy, I actually drew it, because it works with a single neuron. AND and NAND work too. But no, it doesn't work for any function. There are certain prerequisites a function must satisfy, and I'll tell you what these are later. Okay. So we now know that if we want to make a network like this behave in a certain way, then we need to find the right weights for these connections. And the question now is: how do we determine these weights? Because guessing is a pretty stupid way to do this.
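Here is that XOR network as code, using the weights and thresholds from the walkthrough above; the generalized perceptron helper and all names are my own sketch:

```ruby
# One perceptron: weighted sum of the inputs against a threshold.
def perceptron(inputs, weights, threshold)
  sum = weights.zip(inputs).sum { |w, i| w * i }
  sum > threshold ? 1 : 0
end

# The three middle neurons plus the output neuron from the slide.
def xor(a, b)
  top    = perceptron([a],    [1],    0.5) # passes a through
  middle = perceptron([a, b], [1, 1], 1.5) # fires only if both inputs fire
  bottom = perceptron([b],    [1],    0.5) # passes b through
  perceptron([top, middle, bottom], [1, -2, 1], 0.5)
end

xor(1, 1) # => 0
xor(1, 0) # => 1
xor(0, 1) # => 1
xor(0, 0) # => 0
```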
And this is where it gets fancy. The weights are learned through a technique called backpropagation, which is short for "backward propagation of errors". Okay, and I'll show you how this works. Again, we start with this neural network, and we want to teach it to give us a certain kind of output for some input we provide. So we sort of have a wish list of things it should output for our inputs. Again, I will randomly assign weights, you know this part already. And now I take that function wish list, take the first row, and input it. Our network will start calculating and calculating and calculating and calculating some more. And finally, that's a keynote effect you haven't seen today yet. So, magic, we got an output, and it's 19. Only when we compare this to our function wish list, it's a little off, because the expected output was 26. But I've told you, this thing is able to learn. And just like in real life, learning starts with making mistakes. Or, I'm sorry, I had to put this quote in here, like Jake would say: sucking at something is the first step towards being sort of good at something. That's also true for neural networks. Instead of discarding all this, let's look at the error a little closer. We're off by seven, so it might be a lesser known variant of the off-by-one error, but it's not. Let's zoom in on that blue part for a second, in the back of that network. Now that last neuron is looking at its predecessors and going like: hey guys, I mean, this didn't really work out. Maybe we can adjust these weights between us so that we can minimize the error and get closer to the result. And that's what we're going to do. Oh yeah, secret formula. We first calculate the error, I mean, that's what we've already done: the error is seven. And now we change the original weight just a little to get closer to the result we expect. So for this example, I'm just changing the three up there to a 3.2.
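That nudge from 3 to 3.2 follows the shape of the usual update rule: move the weight a small step in the direction that shrinks the error. The learning rate below is my own pick just to reproduce the numbers from the slide, not a value from the talk:

```ruby
LEARNING_RATE = 0.03 # assumed; in practice this is a tuning knob

# Nudge a weight a little, in proportion to the error and the
# input that flowed through this connection.
def adjust_weight(weight, input, error)
  weight + LEARNING_RATE * error * input
end

# Expected 26, got 19: an error of 7 nudges the weight 3 up to ~3.21.
adjust_weight(3, 1, 26 - 19) # => ~3.21
```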
And then the error gets smaller and we get closer. I do the same thing for all of these other weights, and when I'm done with adjusting them, then in this case my error is already just half of what it used to be. So that was the back part. We adjusted the weights there, and now we're just going on like that: again, we look at these errors and then adjust the weights of the connections leading to them to minimize that error. And then we go on like that, moving through the net from back to front, calculating the error, adjusting the weights. Maybe at this point you understand why this is called backpropagation: because the error is propagated backwards. And when we arrive at the front, we've done this for the first row of our training data. So we've minimized the error for this first row. The next step is to do it all again for the second row. And when we've finally done it for all of them and adjusted all these weights, then we do it again. And again. A hundred times. A thousand times. And probably ten thousand times more. And as you might guess by the state of that poor guy, neural networks are really slow learners. They need a lot of training. But the good thing is we have supercomputers doing this for us, so that's taken care of. Okay. So you now have, probably, hopefully, a basic understanding of how a neural network works. Does anybody remember what the full title of my talk was? I'll look it up. It was "SkyNet for Beginners: Using a Neural Network to Train a Ruby Twitter Bot". But since I'm standing up here and nobody will prevent this, I'm just going to rename it now to "Neural Networks, and How to Elegantly Deal with Failure". So I brought a manual. The first step is really easy: don't panic.
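The training loop just described can be made runnable in miniature: a single perceptron learning OR with the classic perceptron learning rule. Real backpropagation spans multiple layers, but the loop shape is the same — pass over every training row, nudge the weights by the error, repeat many times. The learning rate and epoch count here are my own assumptions:

```ruby
LEARNING_RATE = 0.1

# Each row is [inputs, expected_output]; this is the "wish list" for OR.
rows    = [[[0, 0], 0], [[0, 1], 1], [[1, 0], 1], [[1, 1], 1]]
weights = [0.0, 0.0]
bias    = 0.0

predict = lambda do |inputs|
  sum = weights.zip(inputs).sum { |w, i| w * i } + bias
  sum > 0 ? 1 : 0
end

# Many passes over all the rows, nudging each weight by the error.
1_000.times do
  rows.each do |inputs, expected|
    error   = expected - predict.call(inputs)
    weights = weights.each_with_index.map { |w, i| w + LEARNING_RATE * error * inputs[i] }
    bias   += LEARNING_RATE * error
  end
end

# After training, the perceptron reproduces the whole wish list.
```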
Although now I can add a little story of how I was just lying on the bed in my hotel room, going through my talk one last time, when the telephone in the hotel room rang. It was one of the organizers going: your talk is now, where are you? And I was like: what? It's Sandy's talk. No, you have to be here now. So of course it was Sandy's talk that was due. But I'm pretty calm now. Okay, second step: admit it and tell the story. So the story goes like this. One day I was scrolling through my Twitter timeline, like every day, and I noticed there are really a lot of unhappy people on Twitter. Somebody should do something about that. At the same time I had this idea to play around with artificial intelligence, so I thought: why not combine the two? That was how I came up with the idea of combining a Twitter bot and neural networks. Okay, one other thing first. When using artificial intelligence, you have to be really careful, because we all know what happens when this goes wrong: an ill-humored android with an Austrian accent and stupid sunglasses will come and kill somebody's mother, and I really didn't want that to happen. That's why I thought to myself: just in case I create a powerful self-aware artificial intelligence that will take over the world, it had better have manners. So I chose manatees. As a prerequisite for this whole venture, I rubied a little service called, I don't know what it's called, manatee.pix, and it just returns a random calming manatee. If you missed that meme, it's about pictures of manatees overlaid with calming messages. And the next step was to write a Ruby Twitter bot that would send these calming manatees to people, and at first it would just reply to everybody.
And that worked pretty well, except for that one time when I was coding on this late at night and accidentally connected the script to my personal Twitter account, and it started sending out all these manatees to all the people that had ever replied to me. Yeah, the hashtag is manatee-calypse. But the awesome thing is that everybody was really happy to receive a manatee, and some people even asked: why don't I get a manatee? Okay, anyway. So these were the first two steps of my master plan, and then there was the third step, where I thought: yeah, I'll just take these tweets and analyze whether the person is unhappy. Well, things started to get hairy there, because that's not actually an easy problem. So let me briefly tell you what the problems are. First, to train a neural network, as I said, you need to train it a lot, and you need really good training data. That's already the first prerequisite where many applications fail. There actually is a corpus of tweets that are classified according to their emotions, but it's not actually good. So maybe I'll go on to the next step in my manual, and that is: learn your lesson. The first thing, as somebody standing up here, I can tell you: do your research before submitting to a conference's call for papers. I didn't actually do that for this one. Although when I was sitting backstage and closing browser tabs, I noticed that I still had the tab open that I used to make the screenshot. So yeah, it just feels way better to actually do something that's possible. Also, maybe you don't want to choose the hardest problem to solve as an example. And then there's the thing about using the right tool for the job, because neural networks are good at certain things, pattern recognition is one. Like a "that's what she said" classifier that tells you whether, after a given sentence, a "that's what she said" answer would fit. Although I'm not sure, maybe it would always return true.
I don't know, I haven't tried it. Then there's time-series prediction, or recognizing something in an image, recognizing dogs, and signal processing, where you have to filter out noise and stuff. That's what neural networks are really good at. Oh, my favorite step, step four: add cute animals. So in case the manatees don't work for some of you, I've prepared something. Are you ready? Cuteness. And, okay. The last step is: move on. I think this is where I should go running from the stage, but I have to do two more slides. Okay. Shout out to Bitcrowd, who paid for my flight and where I Ruby most of the time. These slides are under a CC license and I'm totally going to put them online later, I forgot to do that now. And yeah, that's it. Thank you.