You can all hear me? Good. This is exciting. I flew all the way over from the Pacific Northwest, up in Idaho, so that was an adventurous little plane ride yesterday. But I'm really excited to be here, and really excited to talk to you about my journey, my adventures with Elixir and learning how to build a neural network from scratch. My name is Karmen Blake, @kblake everywhere else on the interwebs.

I'm going to start out with a quote by Andy Hunt: "The only defense against constant change is constant learning." These last months, this recent year in the Elixir world, have been exactly that for me: constant learning. I'm excited you're here, and I'm excited I'm here. I've been meeting lots of people and learning from every conversation, so it's been great.

So, neural networks. How in the heck did I get exposed to this? Where I'm from in Idaho isn't really known as the tech mecca of anything, except maybe potatoes, and I'm not into agriculture. But I was exploring some different meetups and dev co-ops, and I found one. I show up, probably four or five months ago, and there's this one guy going, "I'm learning all this stuff about neural networks. I'm going to run our dev co-op group through some neural network material." And I was like, oh, sure. Sounds good. Let's do it. I'll jump into some neural networks and learn all about that. Everybody was able to choose what language they wanted to use, and there was Ruby, JavaScript, Python. My goal was: hey, I want to learn more Elixir. I'd been tinkering with it, doing a lot of little pet projects, but I really wanted to dig into something more substantive. So that's what got me into neural networks.

Once you start reading about neural networks, your initial thought is: what the heck did I just get myself into? It's a very, very involved field, and you can go as deep as you want mathematically and theoretically. There's a neural, biological aspect to it, and there's a technological aspect to it. And furthermore, you see a lot of references in the mainstream; it's gotten beyond academic. You can read as many academic papers about neural networks as you want, but mainstream-wise, LinkedIn, Facebook, Pinterest, Google, IBM: they're all coming out with machine learning platforms, learning materials, or libraries you can leverage. So I think the timing for getting into this field is really good, and it's a great new area to explore. I'd definitely encourage you to check it out if it's something you've been interested in or have only seen from afar. I think it's becoming more approachable all the time.

I will jump to some Elixir, I promise, but first I need to run you through some neural network terminology to give us a base to move forward with. So bear with me on this; I'll move through it pretty quickly, but I do want to give you some of the fundamentals of a neural network.
But if you take a step back and think about what a neural network really is and what its purpose is, it's really about patterns and pattern recognition. It's about learning something: given a set of input patterns, what are the resulting patterns your network can learn? Your network is also continually adapting, continually learning. That's a big phrase you'll hear with neural networks: training your network, your network is learning. The inner workings of your network are continually adapting, trying to recognize patterns.

To break that down with a visual and some terminology: not only do you have an input pattern that results in an ideal output pattern, you also have layers, and in those layers is where the magic happens. I'll give you another visual that looks like this. For example, an input pattern would be to swim. What would be a successful swim? At least keeping your head above water; that's the resulting pattern, your expected ideal output. If you want to cook, hopefully you can combine some consumable items into something that can be eaten. Daydreaming? You want to enjoy an imaginative story. So you have an input pattern and a resulting output pattern you want to happen, and in the middle you want your neural network to learn and train to be able to produce that output.

At the core of it all is a neuron, and this neuron has some mathematics built into it. It takes input values, the input patterns, through its incoming connections, computes a weighted sum of those inputs using the connection weights, activates, and generates an output. Don't get too wrapped up in what all of that means yet; just think about the patterns I opened with. There's some sort of input and you want an output. This neuron is the fundamental piece of your network, and if it has the ability to accept input and generate output, that's the start of it.

Like I said, I can build off of this and the network can grow; a vertical slice of neurons is a layer. Each layer communicates with the next layer, which generates an output, which generates an output, and at the end you have your output value. What's important there is that you have a target output, your successful or ideal output pattern, and you have the output your network actually produced. Two different outputs. At the very end, you want to find the delta, the difference between your target output and the output your network generated. That's really important, because it tells you how far off you are. The goal for your network is to get that delta smaller and smaller and smaller until the ideal output pattern is produced, or pretty stinking close.

That's just one part of training your network. The other part is what's called back propagation: the ability for your neurons to update their connection weights and produce the output again. You run this through many, many cycles, called epochs, to train your network.
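To make the neuron's job concrete before moving on, here's a minimal sketch in Elixir of the math just described: weight each input, sum, and squash the result through an activation function. The module name and the choice of sigmoid are my own illustration, not code from the talk's slides.

```elixir
defmodule NeuronMath do
  # Squash a weighted sum into the 0..1 range; sigmoid is one common choice.
  def sigmoid(x), do: 1.0 / (1.0 + :math.exp(-x))

  # Multiply each input by its connection weight, sum them, then activate.
  def activate(inputs, weights) do
    inputs
    |> Enum.zip(weights)
    |> Enum.map(fn {input, weight} -> input * weight end)
    |> Enum.sum()
    |> sigmoid()
  end
end

# NeuronMath.activate([1, 0], [0.4, -0.2]) #=> ~0.599
```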
The big picture of that looks like this: you activate your neurons from an input toward the desired output, and at the very end you check the delta. How far does the network output differ from your target, your ideal output? Then you back propagate that difference through your neurons. The network is saying: OK, I'm off by this much, let me adjust the weights so that next time I activate, I'm a little closer. And it does this over and over and over, and at the very end your deltas, your error rates, should get smaller and smaller. Big picture, that's how a simple neural network works from afar. Like I said before, academically and mathematically we could go into a lot of the internals here, but I really need you to see the big picture of how it works.

So: I can learn now. Picture that to the Rocky theme. I can learn now. I have a network; I can see how to activate it, back propagate, get the delta small, and move forward.

In doing this, I had to think about how to approach it in Elixir, because I was in a group with JavaScripters, Python developers, and Ruby developers, and everybody had their own approach, their own way of doing things, their own libraries. I wanted to approach it the way I thought I would do it in Elixir: data transforming data, functional. I had that in my mind. I was in a bit of a different crowd, maybe the only functional programmer in the group, so I thought, well, I hope I do this right, I hope I can set this up OK, and in the meantime I'm just going to learn a lot. And that was really the goal of the whole thing.

Learning Elixir, and you'll probably hear this a lot while you're here, I had one major thought going in: transform all the data. That's a really big thing in the community; you hear it all the time, you read Dave Thomas' book. Transforming data is a really important skill to have when you're programming Elixir.

So I started writing some code, just to show you my progress through this thing. I set up some structs. Nothing exciting there, but I wanted to represent a neuron, a connection, and a layer in the system. Like I showed you visually, a neuron is the core piece that gets influenced by connection weights, and a layer is just a list of neurons; your input layer communicates with the next layer, and so on down to the output layer. Not too bad. Moving on, I can construct lists, I can build lists of structs. Again, nothing mind-blowing, but I'm building up structs representing the things within a neural network. Then: okay, how am I going to connect my neurons? Alright, I can transform lists of structs, that sounds fun, I can just build these connections up. Every neuron gets a list of incoming connections and outgoing connections, and I build them up from there. So I can take two neurons, connect them, transform the data in the neurons, return them, use them. This was awesome, very straightforward. Furthermore, I can activate my layers: retrieve a layer, iterate through the neurons within it, activate each neuron, map the new neurons, update the layer, and so forth.
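As a rough sketch of that first struct-based approach (the field names and the connect function here are illustrative, not the exact code from my repo), it looked something like this. Note how connecting two neurons returns brand-new copies:

```elixir
defmodule Neuron do
  defstruct input: 0.0, output: 0.0, incoming: [], outgoing: []
end

defmodule Connection do
  defstruct source: nil, target: nil, weight: 0.4
end

defmodule Layer do
  defstruct neurons: []
end

defmodule Connector do
  # Connecting two neurons means transforming both structs
  # and returning fresh copies of each.
  def connect(source, target) do
    connection = %Connection{source: source, target: target}
    source = %Neuron{source | outgoing: [connection | source.outgoing]}
    target = %Neuron{target | incoming: [connection | target.incoming]}
    {source, target, connection}
  end
end
```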
So I'm transforming data. That's what we're supposed to do, right? It had been awesome and I was really enjoying it. I'm connecting neurons and so forth, but I still needed to connect my layers and I still needed to build a network. I didn't get that far, okay? I hit a wall, right around the code I just showed you. I hit a wall because connecting layers of neurons was actually hard. Transforming a graph, lists filled with neurons related by connections, was conceptually, on the surface, like: oh yeah, I'm just going to do it. But technically implementing it was really hard. In fact, it was very, very difficult, because there are so many related things in there: the connection weights, the neuron data, the incoming and outgoing neuron lists. Many pieces of data were related to each other.

So I hit a wall, and just to reiterate the point: assuming I have layers with one neuron in each, what was happening is that the data being transformed was also being duplicated. It was being duplicated, the data was getting out of sync, and it became more difficult to transform later. My initial creation of layers and connections was fine, but as soon as I started wanting to transform and update values, it became very difficult, because I had duplicated connections and neurons all over my code. Remember, I was just generating structs, and then copies of structs, and then copies of copies. Transforming those cleanly just was not happening for me.

I was creating code where I would iterate over some input neurons and some output neurons and try to connect them, and my code started to get smellier and smellier. I started leaving comments like "refactor this." At first I was like, okay, come back to this, do some accumulation, rework this. I started creating function names like build_input_layer_neurons_with_connections, and the stank was getting worse. I actually have a couple of repos, I'll give you the links later, so you can look at my disastrous attempts and then my reworked attempts. This is a reference to my initial implementation of just transforming all the data. I started doing this weird mixing of transformations and agents, and I was using agents all wrong. It was really, really bad, and it just got worse. You see more ugly comments like "simplify this," "reduce," "do some things here." I'm leaving this code small and unreadable on purpose, because, number one, I don't really want you to see how ugly it is. But number two, I want to visually show you that large functions, long lines, and this kind of accumulation are really bad smells in code. Again, this was me building a neural network from scratch for the first time ever, so I do expect some ugliness, but I wanted to demonstrate it: be careful of smelly code like mine, with the to-dos in there to simplify.

This was painful, and I really did hit a wall. I actually felt it all crumbling down, because all my happy little JavaScripters, Pythoners, and Rubyists were humming along, building up their stuff, and I was falling behind because I was fighting with my code. I was basically building a kind of house of cards. It just wasn't sustainable.
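To see why that house of cards falls over, here's a tiny, contrived illustration (mine, not from the repo) using the structs sketched earlier: update one copy of a neuron, and every struct that embedded the old copy is now stale.

```elixir
neuron = %Neuron{output: 0.0}
connection = %Connection{source: neuron}  # embeds one copy of the neuron
layer = %Layer{neurons: [neuron]}         # embeds another copy

# "Update" the neuron by building a new struct...
neuron = %Neuron{neuron | output: 0.99}

# ...but the copies inside the connection and the layer are now out of sync.
connection.source.output  #=> 0.0
hd(layer.neurons).output  #=> 0.0
neuron.output             #=> 0.99
```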
So then I came across the article that José Valim wrote for the "How I Start" series on Elixir. I don't know how many of you have read that. It's awesome; it's really good. And I took away one big quote from it: "When we need to keep some sort of state, like the data transferring through a portal, we must use an abstraction that stores this state for us. One such abstraction in Elixir is called an agent." So I thought, okay, maybe I'll look into this further. I did mention I had tried to use agents for something else, but I used them totally the wrong way.

Reading that article spawned some good ideas, so much so that I retired the original GitHub repo. I called it version one, but it's really just an outdated something to learn from. And I created a brand new repo from scratch, another from-scratch, this time with a different mindset: how can agents help me organize neurons, connections, layers, and the data they share between each other? Because that data actually starts to grow quite a bit.

Each neuron, connection, layer, and network could be represented with a PID that I could reference. That made it much simpler to have a place of truth, so that when I reference a neuron, it's not one of three copies or an outdated version of that neuron. A connection can have a reference to a neuron, and a layer can have a reference to a neuron; that's what I fell into before. I wanted each one to have a single place of truth. So PID management became a thing. I had to make sure I was keeping track of my PIDs, which was pretty straightforward: I was able to do lookups, transform that place of truth, and then preserve it. That was big for me. It opened my eyes to being able to much more simply keep track of the pieces of this network, transform them appropriately, and preserve them for future activation and back propagation cycles.

So I updated my earlier description: data is being transformed but not duplicated. It stays in sync and thus becomes easier to transform later. The source neuron, the target neurons, and the connections are all defined and accessed via a PID, as opposed to the copies of structs from earlier. Connecting layers became easier, and I was actually able to move forward and build out the rest of the network in a pretty straightforward manner. Connecting a layer pretty much just became that: I reference the source of truth for the layers and neurons, and I can easily connect the neurons within the layers. Where before I showed you three slides of really ugly-looking stuff, it basically turned into just this. It simplified things tremendously, tremendously. And I went from crying to happiness. Big smile.
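A minimal sketch of the agent idea (the module and function names here are mine; the real repo differs): each neuron lives in an Agent, the PID is the single source of truth, and every update happens in exactly one place.

```elixir
defmodule NeuronAgent do
  @defaults %{input: 0.0, output: 0.0, incoming: [], outgoing: []}

  # Start an agent holding the neuron's state; the returned PID
  # becomes the one and only reference anyone stores.
  def start_link(fields \\ %{}) do
    Agent.start_link(fn -> Map.merge(@defaults, fields) end)
  end

  # Look up the current state through the PID.
  def get(pid), do: Agent.get(pid, & &1)

  # Transform the single place of truth.
  def update(pid, fields), do: Agent.update(pid, &Map.merge(&1, fields))
end

# Layers and connections store PIDs, never copies of neuron data.
{:ok, source} = NeuronAgent.start_link()
{:ok, target} = NeuronAgent.start_link()
NeuronAgent.update(source, %{outgoing: [target]})

NeuronAgent.update(source, %{output: 0.99})
NeuronAgent.get(source).output  #=> 0.99, and there is no second copy to fall out of sync
```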
So, to me, transformations are still very, very important. Remember, going into Elixir and learning Elixir, the big thing was data transformation. But I made some definitions for myself about transformation: macro transformations and micro transformations. I started out with "transform all the things," but transforming a whole neural network is hard, and I ran into that. Trying to transform a whole neural network at once is very, very difficult to do. However, micro transformations, list updates, struct updates and so forth, with the pipe operator and chaining, are beautiful. That's what I'm recommending; that's what I'm encouraging you to use liberally. I define a micro transformation as reconstructing manageable data from one form into a more meaningful form, given the context it's in. The key words are in there. First, manageable: a neural network is a big creature, and to me it was less manageable, so make sure what you're transforming is manageable. Second, given the context it's in: you're massaging the data into something that makes sense for the context you're in. That's something I learned along the way.

Alright, on to training the network. Training is that whole big loop of activating your network, getting the error rate, the delta, smaller and smaller, and back propagating; that process is training your network. So I can set my network up, start it, and feed it some input data. Some input patterns commonly used for what I was doing are OR logic, NAND, and XOR: pretty straightforward, well-understood inputs where we know the expected output. Then I run the network and train it: decide how many epochs, or iterations, I want to run through, and let it go to town. The gates I mentioned, OR, XOR, and NAND, are things we can relate to right now, more achievable problems to solve, and that's how I organized them. Another one is the Iris flower data set, which is really common in the machine learning world; if you want to read more about it, check out the URL. My network did learn this pattern set: I can give it a set of inputs, and it learns to produce the outputs over time. Which is pretty cool, actually.

This is the code that actually executes the learning of the network. I iterate over however many epochs we defined, and notice the two actions at the beginning of the reduce function: activation and training. Activation is the output generation from left to right. Training is the back propagation: you want your neurons to learn from the deltas so the network can get smarter. If I run this through 10,000 times, my network eventually learns those logical patterns, which is really, really cool. All I've done is accumulate the average error and then report it, so we can visually see that over the run of the training session the error rate goes down. You can visualize it: the error rate starts out high, goes down, and then you can kick back and say, whoa, I know OR logic. I trained my computer to learn OR logic.

Some other Elixir lessons, tips, and things I thought were just cool. Again, you'll hear this a lot, but the pipe operator is beautiful. It's an example of a micro transformation that I think should be used liberally, and I think a lot of people are seeing that and using it. For instance, this is the code that iterates over my hidden layers and trains them, so that's the back propagation.
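Putting those pieces together, the training loop has roughly this shape. This is a sketch: Network.activate/2 and Network.train/2 are hypothetical stand-ins for the repo's real functions. The point is the reduce over epochs, activating forward and back propagating on each pass while accumulating the average error.

```elixir
defmodule Trainer do
  # One pass per epoch: activate the network forward, then back propagate,
  # accumulating the error so we can watch it shrink across epochs.
  def train(network, data, epochs) do
    Enum.reduce(1..epochs, network, fn epoch, net ->
      {trained, error_sum} =
        Enum.reduce(data, {net, 0.0}, fn {inputs, targets}, {n, errors} ->
          activated = Network.activate(n, inputs)               # forward pass (hypothetical)
          {trained, delta} = Network.train(activated, targets)  # back propagation (hypothetical)
          {trained, errors + delta}
        end)

      IO.puts("epoch #{epoch}: average error #{error_sum / length(data)}")
      trained
    end)
  end
end

# OR-gate training data: {inputs, ideal output}
# data = [{[0, 0], [0]}, {[0, 1], [1]}, {[1, 0], [1]}, {[1, 1], [1]}]
# Trainer.train(network, data, 10_000)
```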
Furthermore, pattern matching lets you overload your functions, so that depending on the input you receive, you can really clean things up and write nice, concise functions. A Ruby developer asked me what I would do in Ruby, and in Ruby you'd probably have one method with a bunch of conditionals in it, looking for different types of inputs and branching inside the method, whereas in Elixir it's more common to break those up into nice, meaningful functions that do the right thing via pattern matching.

One other thing I think is sometimes overlooked, if you're going to create a project, is the documentation support in Elixir. They built it in from the beginning: if you add documentation to your public functions and get the documentation tooling set up, you can generate documentation from your code, and you get beautiful-looking docs like that. And Hex has a way to organize that documentation and publish it with your Hex package. Definitely something I thought was cool while I was working through all this.

Another thing, if you're doing any TDD: I use mix test.watch, which just lets me run my tests continuously. And Alchemist is an Elixir package for Emacs that I think is really, really great. I know a lot of people are looking at different editors, Atom or Vim or Emacs; Alchemist is what swayed me toward the Emacs way after being a long-time Vim user, and it's been awesome. I've actually incorporated it into Spacemacs, kind of a hybrid of Vim and Emacs. For those learning Elixir, you want cool little tools like this, so definitely look into it.

And I think being able to finish that neural network in Elixir and demonstrate it to the dev co-op group I started with was a good thing, because I kept bringing up concepts like pattern matching, the pipe operator, and transforming data, and I actually got the crowd so interested that they asked me to come teach a series of functional programming sessions to the group using Elixir. So I get to share some more Elixir love with a group that didn't even know Elixir existed before, and I'm pretty excited about that. If you follow me at all on Twitter or anywhere, I will definitely be posting slides, code, that kind of stuff, if you want to follow along. My code is up on GitHub, and I'll post my slides on Twitter so you have access to them.

In closing, I do want to thank UserTesting. They provided the shirts that were out there earlier, so hopefully you got one of those. I want to thank them for sending me over here to do this. Thank you very much.