I want to talk a little bit about teaching computers how to play games. This is something that we as an industry have been doing for years and years and years. Game playing has been one of the staples of artificial intelligence research for years, and we learn a lot from teaching computers how to play games. Now, I'm going to keep it pretty simple because I'm not a deep expert in this area, but there's some really interesting stuff you can learn from game playing that helps with other problems as well.

So computers can play all kinds of games. There are simple games like tic-tac-toe. This actually was a good game to learn some interesting algorithms on years ago, and now I've actually seen it being used to teach new programmers: one of the first problems they're given is to write a tic-tac-toe game, a program that can play it. Then you can get into some more complicated games; you might recognize a few of those. Chess in particular has been a field of research for years and years and years, and it wasn't until maybe the last 10 years or so that computerized chess players could get as good as humans. You might remember Deep Blue playing against Garry Kasparov years ago and finally actually beating him. It's also interesting that human grandmaster players will actually play differently against a computer, because they're trying to find flaws in the algorithms. You can't beat computers the same way you can beat other human players, just because of the differences between them. So that's kind of interesting. Another game that's still not completely solved is the game of Go, which I've never played, but just in the last year or so somebody announced that they've solved 18 by 18 Go, I believe, and 19 by 19 is kind of the standard game. So it's getting closer with computing power and new techniques and algorithms. So game playing is pretty interesting. You can get into even more complex games. Or maybe not. If you've never seen WarGames, you have to see WarGames. That's where all the screenshots are from. I had to watch it again just to make sure I was up to date on all the references and to get some screenshots and stuff.

So when you're writing a computerized game player, one of the things you'll run into fairly quickly is trees and graphs and algorithms for traversing those. Those are obviously interesting for game playing. They're interesting for maze solving, but there are also business problems that you might get into with graphs: hierarchical structures like organization charts. I could probably give a whole other talk on representing tree-based data structures in Rails, because I had to do that for a project and there's a lot of material out there on that. Relationship maps: when LinkedIn tells you that you're two or three connections away from somebody, that's a graph traversal. That's basically what it is. Trip planning: how did I get from Medford, Oregon to Austin? Well, I had to make a couple of hops, and that's a graph. Network routing: how do my packets get from here to there? Graph problems again. So those are all really good practical applications, but it's Saturday, so let's talk about game playing instead. It's much more fun.

To illustrate what I want to talk about, I'm going to use a board game called Ricochet Robots, and I actually brought it with me. If you're interested in checking it out or playing it with me later, hit me up at the break or tonight. It's a super fun game.
I was introduced to it at Ruby DCamp a couple of years ago and kind of fell in love with it, and immediately started thinking, well, how could I write a program that would play this game? And so then I proposed a talk on it, so I was forced to write the program, which is maybe not the best way to find side projects because it puts a really tight schedule on you, but it's been a lot of fun.

So in Ricochet Robots, you have a board that's a 16 by 16 grid, and the colored shape cells on there are the goal cells. Then you have a set of five robots that are randomly placed around the board, and you turn over a little disk that has one of these symbols on it, and you have to try to move that color of robot into that cell in the fewest moves possible. Robots can only move in straight lines, and they can't stop until they hit something, so they have to run into a wall or another robot. You have to figure out a sequence of moves of any number of robots to get the active robot, the color you're trying to solve for, into the goal cell. So in this case, if our goal cell is the green square and we need to get the green robot in there, there's actually a seven-move sequence that will get him there. And you do this in your head. You can play with any number of players, and when you come up with a solution, you say, oh, I can do it in 10 moves, and everyone else gets about a minute to try to beat that. Okay, so in this case, the solution is seven: we move the blue robot around, and then we move the green robot around behind it, bounce off the blue robot, and into the goal cell. So that's basically how the game is played.

Now, when you're going to write a solver for a game, you kind of have to get a sense of the scope of the problem you're dealing with. So in this case, we want to look at how big of a problem we're talking about. Well, it turns out there's something like 976.5 billion possible board states. There are 252 cells, not counting that center island, which is always there, and five robots, and you do the permutation math for that and you end up with that many states. That's a pretty big state space. We're probably not going to search all of that in a reasonable amount of time if we actually want to play the game. The other thing that's interesting is, from any single board state, how many possible moves are there? It turns out there are anywhere between nine and 20 possible moves from any board state. A lot of times the robots are down in a corner, so they can only move in one of two directions. The robot you just moved is not allowed to reverse direction, so that takes away a move. But if you're just starting and all the robots are kind of in the middle, they can each move in four directions, and for five robots, that's 20 moves. So it's a pretty wide branching factor. A lot of chess positions will have a branching factor of about 20, I believe. So that's kind of the size of the problem we're dealing with, and we're going to need to come up with some algorithms to reduce that down to something we can actually solve in a reasonable amount of time.

So the first thing we have to do is figure out how we're going to represent the board. Well, we start out with the basic grid. That never changes; that's constant. As many games as you play, there's a 16 by 16 grid with a 2 by 2 center island. That's always there. So when we're talking about all these states, we're going to have a lot of these states in play at one time, kind of a parallel-universe kind of thing.
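As a quick sanity check on those numbers, here is the arithmetic behind them. The 252 usable cells and the five robots come straight from the game; the rest is just counting ordered placements, which is one reasonable way to read "board states" here.

```ruby
# Back-of-the-envelope check of the state-space size quoted above.
# 16 x 16 grid = 256 cells, minus the 2 x 2 center island = 252 usable cells.
usable_cells = 16 * 16 - 4

# Five distinct robots, each on its own cell: ordered placements (permutations).
states = (0...5).reduce(1) { |product, i| product * (usable_cells - i) }

puts states                     # => 976484376000 (roughly 976.5 billion)
puts states / 1_000_000_000.0   # => 976.484376

# Branching factor: each of the 5 robots can move in up to 4 directions,
# so at most 20 moves from any state; walls, corners, and the no-reverse rule
# trim that down to as few as 9 in practice.
puts 5 * 4                      # => 20
```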
And so you want to reduce your storage requirements as much as possible for representing the different states. So the board is fixed; we don't need to copy that all the time. We can just keep one copy of it. Similarly, for the duration of one game (one game is basically solving for each of the 17 goal cells in turn), the walls and the goal cells are fixed as well. There are different board configurations, so you can play different games with different layouts, but for one game, those are all fixed. So again, one copy; we don't need to copy that all the time. Then there's the actual goal. That changes each turn, but it's constant for the duration of that turn. So we need to solve for the green square, and then we need to solve for a different one, and so on. And then there are the positions of all the robots, and those change every turn. So the things that are variable in this game are the robot positions and the goal cell, while the walls, the targets, and the board are all fixed. We can use that to reduce the amount of storage we need, the amount of memory we need to use for our solver.

We also have to figure out how we represent robot movement. The way I've chosen to do it, and it's pretty standard in game playing, is to represent the board states as nodes and robot movements as transitions between the states. So when I'm in a particular state, I can move the red robot right or left, or I can move the green robot right. And then from those states, I can maybe move the red robot down or the green robot right, and so on. So the moves of the robots are transitions between the states. What we end up with is a data structure known as a tree. And for some reason, in computer science, we draw our trees upside down. I don't get it, but that's what we do. The very top is called the root node, and it has no parent. It can have any number of children; in our case, it'll be between nine and 20, probably. And from the next level down, every other node in the tree has exactly one parent. Nobody has two parents in a tree. That's what defines a tree data structure. And in order to traverse a tree, to navigate our way through it, we need to use a search algorithm.

Now, Jamis Buck. How many of you know who Jamis Buck is? Okay, about half of you. He's kind of been out of the Ruby community for a little while, but he's pretty well known from previous days. I'm actually relatively new in the Ruby community myself, so I didn't know him. I finally met him at Mountain West Ruby this year, which was really cool. He wrote a series of blog posts, and published them as a book, called Three Ways Through, which is basically all about search algorithms. It's super accessible. It's written as kind of a story, a fable type of thing, and it's a really, really great introduction to search algorithms. So I highly recommend, if you're interested in this at all, check out his book. It was a good refresher for me on these algorithms.

So I'm going to talk about a few of these algorithms today. The first one I want to talk about is probably the simplest one, which is called depth-first search. In depth-first search, what you do is you start at the root of the tree and you go all the way down one branch until you hit the bottom. Then you bounce up a level and then back down, and up and down. So you're going down the depth of the tree first. What it ends up looking like is this: we're going all the way down the left branch and then just moving across the tree, depth first.
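Here is a minimal sketch of that recursive depth-first search. It is not the solver's actual code: the board and move classes aren't shown, so the neighbor function is passed in as a block and the example runs on a toy graph that stands in for board states.

```ruby
# Recursive depth-first search: follow one branch all the way down before backing up.
# `neighbors` is supplied as a block so this isn't tied to any board representation.
def depth_first_search(state, goal, path = [], visited = [], &neighbors)
  return path if state == goal
  return nil if visited.include?(state)    # guard against cycles on this branch

  visited = visited + [state]              # copy: each branch keeps its own visited list
  neighbors.call(state).each do |next_state|
    result = depth_first_search(next_state, goal, path + [next_state], visited, &neighbors)
    return result if result
  end
  nil
end

# Toy example: nodes are symbols, edges in a hash (standing in for states and moves).
edges = {
  a: [:b, :c],
  b: [:d],
  c: [:d, :e],
  d: [:a],   # a cycle back to the start
  e: []
}

puts depth_first_search(:a, :e) { |n| edges.fetch(n, []) }.inspect
# => [:c, :e]  (the moves from :a; the first path found, not necessarily the shortest)
```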
Now, because Brandon Hays is my hero, I also took all the code out of my talk today too. Seriously, Brandon's awesome. I love him, great guy. And I decided, I'm speaking after lunch, you guys don't want to be staring at code all day, so I'll just illustrate the algorithms for you. But typically you implement this with a very simple recursive algorithm. It's pretty simple code. There are some things you have to watch out for, though.

The first one is a cycle. So the green robot is up in that corner there, and if all you're doing is blindly following transitions between the nodes, you're going to end up going around in circles like this. So you have to watch for cycles like that. The simplest solution is to keep a list of all the board states you've already seen, and when you run into a state you've seen before, just stop; you're done, you've gotten into a cycle. But with depth-first search, you have to be careful, because sometimes you can see something that looks like a cycle that isn't. So on the left here, the robot got to this little cell in four moves, and then later on, on another branch of the tree, I found a way to get to that same cell in three moves. That's the same state. And so if I'm just throwing out states I've already seen before, I'm going to throw that away, but now I've just thrown out a shorter solution than I otherwise would have found. So you have to be careful with that. What you really have to do is give each branch of the tree its own list of states you've seen before, which means you're doing some extra work. But the real problem here is that we actually don't have a tree. This is really a graph. Some nodes have multiple parents; there are multiple ways to get to the same board state. And so you have to watch for that. It's really overly simplistic to represent this as a tree, because it's really a graph.

There are a few other complications. All these search algorithms I'm talking about are really designed to find the shortest path, and that's probably what you're looking for, except in this game there are some short paths that are illegal. For example, if you start directly in the goal cell, that's not a legal solution. You have to get out and get back in. The rule of the game is that the active robot has to change direction at least once, so you can't just go down and back up again. You have to turn left or right at some point. So this is not actually a legal solution, even though it is the shortest possible path to the goal cell. Similarly, this one: the active robot is one away from the goal cell. Well, if I go straight into the goal cell, I haven't ricocheted yet. I haven't changed directions. That's not legal. And so what happens is I go around here, and that looks exactly like that cycle we just saw a few minutes ago. The robot is right back where it started, same state. If we're throwing away visited states, we're going to stop here and say, oh, I've already been here, I'm done. But really the solution is to move directly into the goal cell from there. Five-move solution; that's the best solution. So you have to watch for these things. I thought I had this nailed, and then I was playing with some different algorithms and started traversing states in a different order and was missing optimal solutions, because I didn't actually have this problem solved yet. It's actually a really tricky problem to get all the weird cases right. So these are just some things with this particular game that you have to watch out for.
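One way to picture that "must ricochet" rule is as a legality check on a finished move sequence. This is only a sketch of the rule as described above, not the solver's API; the Move struct and the robot/direction symbols are made up for illustration.

```ruby
# Sketch of the "active robot must change direction at least once" rule.
Move = Struct.new(:robot, :direction)

def legal_solution?(moves, active_robot)
  active_directions = moves.select { |m| m.robot == active_robot }.map(&:direction)
  return false if active_directions.empty?   # the active robot never moved at all
  active_directions.uniq.size >= 2            # it has to turn at least once
end

# Sliding straight in, with no ricochet, is not a legal win:
puts legal_solution?([Move.new(:green, :up)], :green)                      # => false
# Bouncing off the blue robot and turning is:
puts legal_solution?([Move.new(:blue, :left), Move.new(:green, :up),
                      Move.new(:green, :right)], :green)                   # => true
```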
If you're looking for maze solutions, the shortest path is fine; there aren't these extra rules to worry about. So depth-first search runs into these complications, and other algorithms run into some of them as well. But the real problem with depth-first search is that you might go 20 or 25 levels deep in the tree when there's a four-move solution sitting over on another branch, and you just haven't gotten to it because you haven't explored the depths of the other branches. You really have to search the entire space, all 976 and a half billion states, to make sure you've actually got the shortest solution. So depth-first search is really not the best algorithm for this.

There's another one called breadth-first search that is much more suitable. In breadth-first search, instead of searching all the way down a branch of the tree, you search across the levels of the tree. So we start at the root, and then we go down a level, search across that, down another level, search across, down another level, and search across. The way you'd normally implement this algorithm is with a loop and a queue: you take the starting state and generate all the successor states and put them into the queue, then you pull the first state off the queue, find all of its successors, put them on the end of the queue, and stop when you find a solution. The nice thing about this algorithm is that as soon as you find a solution, it's guaranteed to be the shortest solution, because you've already searched all the shorter paths and you haven't searched anything longer yet. So this is a much better algorithm for this kind of problem, because you stop as soon as you find the shortest solution. Again, you have to watch out for the complications I talked about. The other nice thing is that the visited list can now be global across all the branches, because if you get to a state you've seen before, by definition you got there in the same number of moves or fewer than before, since you haven't searched anything longer yet. So this is a much better algorithm. It still wasn't fast enough when I implemented it, and so I started looking at some optimizations.

There are basically two main ways to optimize, and I'm throwing in a third here. The first thing we can do is do fewer things. This seems pretty obvious: if you want to optimize something, don't do as much work. This is usually where you get the most bang for the buck. The second thing is to do things faster. So for each state that we look at, let's do less work, or do it faster. You can get some good gains here, but eventually you're going to hit a wall, or at least diminishing returns, because you have to do some amount of work to solve your problem, and eventually you're going to get that as fast as it can possibly be. So most of the wins are going to come from doing fewer things, but you can still optimize the amount of work you do per state. And then I'll talk a little bit about heuristics, which is a big fancy word that basically means a rule of thumb. What you can do, looking at the specific problem you're trying to solve, is say, well, I know from the nature of this problem that this rule will generally work. It won't always work; there'll be some corner cases where it doesn't quite work, but it's pretty good. It gets me kind of where I want to be.

So I'm going to take a little bit of a break to help you work off lunch a little bit. Can everybody stand up for one second?
That's just to wake everybody up a little bit. I do not have the most entertaining voice in the world, so. All right, I'm going to teach you something. We have this thing where I work, at Zeal, where we give high fives all the time. A couple of my coworkers have given a lightning talk on giving a stellar high five. So I'm going to teach you the trick, which is: when you go to give somebody a high five, you get your hand up, and instead of looking at their hand, you look at their elbow, okay? Now don't put your hand through somebody. Everybody turn and give somebody a high five, okay? Awesome. All right. There you go. All right, gotta get you back now. So we actually put up a fun little site a while back called stellarhighfive.com, and you can go there and give somebody a virtual high five on Twitter. So if somebody's done some work that you think is really cool, you can go to stellarhighfive.com and give them a virtual high five. I have a few Stellar High Five stickers if you want one, and you all earned one, because now you know how to give a stellar high five. All right, so back to the talk.

I'm going to take you on a little journey through the optimizations that I did when I was working on this solver. They're not in any particular order other than the order I did them in. The first thing I tried was a heuristic. When you're playing the game, obviously the very last move you make to win a turn is to move the active robot into the goal cell. So what if we explore the states where we're moving the active robot first? So I tried that. This is a sample game; it's the one I use for all my performance measurements, so 17 goal cells. The blue line with the circles is the original algorithm I had, and the red line with the triangles is with this heuristic in place. You can see that most of the time the red line is at or below the blue line, so it's a little bit of an improvement. Most of the places where they're exactly equal are where the original algorithm was choosing the active robot first anyway. There's that one weird outlier case. It's about the hardest one to solve in this whole game I was playing, and the heuristic did not help there, and that actually threw off all the numbers. So this is what I mean about heuristics: they don't always work. Most of the time it worked; in one case it just didn't. I kept this heuristic in place for the rest of the iterations I did, because it seemed like a good heuristic to try, but it didn't work all the time, like most heuristics.

So then I looked at a way of doing less work. What I realized I was doing, in the breadth-first search algorithm, is that I was generating the successor states and putting them on the queue, but I wasn't actually looking at them to see if they were solutions until I pulled them off the front of the queue. So if my goal, my solution, is at node 16 there, I have to go through all 16 nodes to find it. But if instead, when I generate the states, I look at them and ask, hey, are you a solution? then I can stop when I get to the sixth node, and I save myself 10 states there. Now imagine a tree with nine to 20 branches at each level, going down nine or 10 levels. That's a significant amount of work. And that's what I mean by doing less work: basically finding a way to not do nearly as much as you were doing before.
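As a rough sketch of what that looks like, here is a generic breadth-first search with a global visited set and the goal check done at state-generation time rather than at dequeue time. It is not the solver's code: the neighbor function and goal test are passed in as lambdas, and the toy graph stands in for board states.

```ruby
require 'set'

# Breadth-first search over states, checking each successor for a solution as it
# is generated instead of waiting for it to come off the front of the queue.
def breadth_first_search(start, goal_test, neighbors)
  return [start] if goal_test.call(start)

  visited = Set.new([start])   # global visited set: safe in BFS, because the first
  queue   = [[start]]          # time we reach a state is already via a shortest path

  until queue.empty?
    path = queue.shift
    neighbors.call(path.last).each do |next_state|
      next if visited.include?(next_state)
      next_path = path + [next_state]
      return next_path if goal_test.call(next_state)  # early goal check saves queueing
      visited << next_state
      queue << next_path
    end
  end
  nil
end

# Toy example standing in for board states and robot moves.
edges = { a: [:b, :c], b: [:d], c: [:d, :e], d: [:f], e: [:f], f: [] }
path  = breadth_first_search(:a, ->(n) { n == :f }, ->(n) { edges.fetch(n, []) })
puts path.inspect   # => [:a, :b, :d, :f], a shortest path (3 moves)
```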
And it turns out that that was about a factor of three reduction in the number of states I had to look at, at least for my sample game. That was a big win.

Then I started running a profiler. When you're doing any kind of performance work, profilers are your friends. You want to be measuring, because most of the time our intuition about what's slow in our programs is wrong, and so measuring is good. Eileen Uchitelle has been doing a talk this year called How to Performance. If you're interested in performance at all, I recommend that talk. Eileen is really cool, and she's doing some really good work on Rails. If you haven't heard of her before, you probably will soon, because she's doing some amazing things. So when I ran the profiler, what I found was that I was spending most of my time figuring out where the robots were going to stop. And since the board is fixed, the robots are going to stop in the same place every time unless another robot is in the way. So I started pre-computing the stopping cells. For example, with the green robot where it is there, if there are no other robots in the way, it's going to stop in one of those four places. And so I can keep track of that for every cell on the board: if I'm here and I move right, I'm going to stop here. Then you have to do a little bit of extra work to see if there's a robot in the way. But it turned out that that was actually a pretty big speed improvement as well. This is how many states I could process in a second, so it's a pretty big jump just from pre-computing some information.

When you're doing optimization like this, one of the things you'll find, especially with these kinds of algorithms, is that a lot of times you're going to be trading off space and speed. You take up more memory in order to make your algorithm faster, or you slow down your algorithm to make things take less memory. Whether you're time constrained or space constrained determines which direction you go. Sometimes you can get a win that reduces memory usage and improves speed at the same time, but a lot of times you're trading those two things off. So I traded off a little bit of memory to store these pre-computed stopping points, but I got a huge gain in speed for doing that. So that was a good win.

The next thing I realized, by looking at somebody else's talk, somebody who also wrote a solver for this game, is that when I'm moving the green robot, I really don't care what color the other robots are. So in these two boards here, all the other robots are different colors, but they're in the same positions. I can consider these two states equivalent, because I don't care what robot I bounce off of. It doesn't matter what color; it just doesn't matter. So those two states can be considered equivalent. This is a way of doing a few less things any time I run into states that are equivalent. But it turned out that in order to implement this, I also had to come up with this concept of what I call the board equivalence class: some representation of a board state that I could compare easily to say, oh, these are equivalent. And that turned out to actually be a win, because I wasn't comparing full board states anymore; I was comparing something much simpler. So this turned out to be an optimization of doing things faster as well. I was able to process a few less states to solve the game, not a whole lot less, but a few, and also speed things up a little bit. So that was a pretty good win as well.
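Going back to the pre-computed stopping cells for a moment, here is a rough sketch of the idea on a toy board that has only the outer walls and the 2 by 2 center island. The real board's interior wall segments would also have to be folded into the blocked? check, and at search time you'd still adjust for any robot sitting between the start cell and the pre-computed stop.

```ruby
# Pre-compute, for every cell and direction, where a lone robot would stop.
SIZE       = 16
ISLAND     = [[7, 7], [7, 8], [8, 7], [8, 8]]
DIRECTIONS = { up: [0, -1], down: [0, 1], left: [-1, 0], right: [1, 0] }

def blocked?(x, y)
  x < 0 || y < 0 || x >= SIZE || y >= SIZE || ISLAND.include?([x, y])
end

# stop_cells[[x, y]][:right] => [x2, y2], the cell where a robot starting at (x, y)
# stops when sliding right with no other robots on the board.
def precompute_stop_cells
  table = {}
  (0...SIZE).each do |x|
    (0...SIZE).each do |y|
      next if blocked?(x, y)
      table[[x, y]] = DIRECTIONS.each_with_object({}) do |(dir, (dx, dy)), stops|
        cx, cy = x, y
        until blocked?(cx + dx, cy + dy)
          cx += dx
          cy += dy
        end
        stops[dir] = [cx, cy]
      end
    end
  end
  table
end

STOP_CELLS = precompute_stop_cells
p STOP_CELLS[[0, 0]][:right]   # => [15, 0]  (slides all the way to the far wall)
p STOP_CELLS[[7, 0]][:down]    # => [7, 6]   (stops just short of the center island)
```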
Again, you can look at your problem and look for ways of simplifying things based on the particular problem you're solving. The next thing I did was with the board equivalence class: I represented each robot on the board as a number, and I put those in a set, because I just wanted to know whether they were the same numbers or not; I didn't care about order. So when I was comparing board equivalence states, I was comparing sets. Well, it turns out comparing sets, at least in Ruby, is slower than comparing arrays. So I changed it to a sorted array, an array with five numbers in it, which didn't take long to sort. I was paying the cost of the sort, but I was comparing these a lot more often than I was computing them, and they compared a lot faster. So I sped things up a little bit more. For small arrays in Ruby, sorted arrays are actually faster than sets, and I got that by measuring. I tried both ways, measured the difference, and it was faster. So again, always measure, always benchmark.

The next thing I did: I realized, or rather my profiler was telling me, that I was creating too many objects. I was basically creating another copy of a board state when a robot couldn't actually move. So if I try to move left and there's a wall in the way, that's not a new board state; I can return the same board state instead. That was a big win. So keep an eye on what objects you're creating; maybe you don't have to create quite so many. Ruby's pretty good at that, but it will slow things down. If you're doing work you don't have to do, that's a big deal. So again, do less work and you're probably going to be faster, obviously. You still have to do the job you're trying to do, but if you can find a way to do less work while you're doing it, that's a good win. The other thing I found is that there were places where I could compare objects for identity rather than a deep equality, and that was another huge win. I was asking, oh, is this robot the same color as that one, and is it in the same position as that one? It turns out that if I was smarter about how I generated new robot positions, I could use identity comparison instead, and that was faster.

So this is where we're at so far. This is the total solving time for this game of 17 goal cells. The board that I was using for these tests is actually a pretty complicated board; I've tried playing it with my coworkers a few times, and we all kind of curse it, because it's a pretty tricky one. This program will play the whole game in three minutes. I've had games that I've randomly generated where the program will play the whole thing in about 30 seconds, which is unbelievably fast. I can't beat it. I can tie it, because there's a nice tie-breaking rule. So it's pretty fast, but for some turns, like if you get to 12, 13, 14-move solutions, the program is probably still too slow, so I'm still working on optimizing it. And I wanted to try some better algorithms to make this faster. This is pretty good, but not where I want it to be yet.

So I want to talk about a third kind of search algorithm, known as best-first search. Now, this is not magic, although it can feel like it. If you can somehow figure out which states or which nodes to explore using some rule, then you can choose your way through the tree a little bit smarter.
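Backing up a second, here is a small sketch of what that board equivalence class can look like: the active robot's position stands on its own, and the other robots go in as a sorted array of position numbers, so their colors drop out. Encoding a cell as row times 16 plus column is just one illustrative choice, not the solver's actual representation.

```ruby
# Equivalence-class key: [active robot's cell, sorted cells of the other robots].
def cell_index(row, col)
  row * 16 + col
end

def equivalence_key(active_position, other_positions)
  [cell_index(*active_position), other_positions.map { |pos| cell_index(*pos) }.sort]
end

# Two boards where only the colors of the non-active robots differ:
board_a = equivalence_key([3, 4], [[0, 0], [9, 12], [15, 2], [6, 6]])
board_b = equivalence_key([3, 4], [[9, 12], [6, 6], [0, 0], [15, 2]])

p board_a == board_b   # => true: same equivalence class, so only one needs exploring

# Comparing small sorted arrays measured faster than comparing Sets in this case;
# as always, benchmark (for example with the benchmark-ips gem) rather than guess.
```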
So it might end up looking something like this, where we're kind of jumping all over the tree. We might go down four levels before we look at some two-level solutions. But in order to do this, you need some way of determining what is best: what's the next best state to look at? How do you do that? It turns out the most efficient algorithm for that is something known as the A-star algorithm. There's an earlier algorithm called Dijkstra's algorithm that I won't go into; A-star is kind of an evolution of that. In A-star, what you do is give each state a score. Then the states you know about so far, called the fringe, the states you're just about to explore, are put in a priority queue, which is a data structure that sorts everything by some priority value. It's used a lot in scheduling algorithms and things like that. You sort these by lowest score, so whatever state has the lowest score comes first. Now, if you choose a scoring function that says, I want the score to be the length of the path I've traveled so far, that actually just turns back into breadth-first search, because we're always searching the shortest paths first. So you can see breadth-first search as kind of a specialization of best-first search. But in the A-star algorithm, you factor in one extra piece of information: the score is the distance you've traveled so far plus an estimate of how much further you have to go to get to the end. So if I've got one state where I've made two moves and I estimate it's going to take another seven to get to the end, that's a total of nine; that's the score. If I've got another state where I've made four moves but I estimate only two more to go, that's a score of six. I'm going to explore that one first, even though it's further down in the tree than the other one.

Now, A-star has a couple of conditions for it to work properly. The main one is that your estimate of how far you have left to go cannot be too high. So if I estimate I've got four moves to go, I'd better not have a solution in two, or it's not going to work. That's the one criterion your estimating function has to satisfy: you cannot overestimate. There's another property, I forget what it's called, but it turns out that the heuristic I came up with has that property as well. What it means is that when you have that property, the first solution you find is the best solution; otherwise, you might have to consider a few more after that. So it's a little bit of an optimization.

So how would we apply this here? How can we come up with an estimate of how far a robot has left to go? One idea is to assume that robots could stop wherever they want, and then figure out how far the robot would have to go to get to the goal cell. So let me show you how that works. To start out, the goal cell is obviously a zero; it takes no more moves to get into the goal cell. Anything you can reach the goal cell from in a straight line is a one: everywhere I can get to the goal cell from in one move. Any cell where I can get to one of those one cells in one move is a two, and then threes, and then fours, and then fives, and you go until the board's filled up. What that gives you is a map of the optimal number of moves it would take the robot to get to the goal cell from where it is now. So there are a couple of places on there that are fives.
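Here is a rough sketch of that lower-bound map on a toy board with only the outer walls and the center island; the real board's interior walls would need to go into the line tracing, and the helper names here are illustrative rather than the solver's actual API.

```ruby
require 'set'

# Flood outward from the goal, pretending a robot can stop on any cell, where one
# "move" reaches every open cell in the same row or column with nothing in between.
# The resulting number per cell never overestimates the real move count, which is
# exactly what an A-star heuristic needs.
SIZE   = 16
ISLAND = Set.new([[7, 7], [7, 8], [8, 7], [8, 8]])

def open_cell?(x, y)
  x.between?(0, SIZE - 1) && y.between?(0, SIZE - 1) && !ISLAND.include?([x, y])
end

# Every cell reachable from (x, y) by a straight slide, if the robot could stop anywhere.
def straight_line_cells(x, y)
  [[1, 0], [-1, 0], [0, 1], [0, -1]].flat_map do |dx, dy|
    cells = []
    cx, cy = x + dx, y + dy
    while open_cell?(cx, cy)
      cells << [cx, cy]
      cx, cy = cx + dx, cy + dy
    end
    cells
  end
end

def lower_bound_map(goal)
  distances = { goal => 0 }
  frontier  = [goal]
  depth     = 0
  until frontier.empty?
    depth   += 1
    frontier = frontier.flat_map { |cell| straight_line_cells(*cell) }
                       .uniq
                       .reject { |cell| distances.key?(cell) }
    frontier.each { |cell| distances[cell] = depth }
  end
  distances
end

heuristic = lower_bound_map([0, 0])
p heuristic[[0, 15]]   # => 1  (same column as the goal, clear line to it)
p heuristic[[5, 9]]    # => 2  (one slide to the goal's row or column, one slide in)

# In A-star, a state's priority is moves_so_far + heuristic[active_robot_cell];
# whichever state has the lowest total comes off the priority queue next.
```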
There's no way to get to that goal cell in less than five moves, and that's only if you can stop exactly where you want to stop, which obviously the robots can't. So this is clearly not overestimating; it's the best-case scenario. If there were robots in all the right spots and I could bounce off them all, that would be my best case. So that's what I used for the scoring function. And it turns out that this ended up not being faster than the breadth-first search algorithm I had. It's close. The breadth-first search algorithm, like I said, can play my test board in about three minutes; the A-star variants I've tried have been three and a half to four. It's a little bit slower, which surprised me a little; I thought this would actually help. I think there are probably a lot of states that end up at the same priority level, so I need to find a better way of differentiating those.

So that's as far as I've gotten with the solver so far. It's been a fun problem to work on, and I'm still working on it, still trying to make this thing faster. I've got some ideas for where to go next. I could look at some collision detection algorithms to figure out where robots are going to run into other robots; that might be faster than what I'm doing. Maybe try working backwards: when you're playing the game, you look at the goal cell and ask, where are all the places a robot could stop before it goes into that goal cell? Can I get the robot there and then in? So maybe there's a way to work backwards. I haven't explored that one very much yet. Maybe it makes sense to move the most recently used robots first. In a lot of solutions to the game, you're only moving two, maybe three robots, and so if you optimize for moving a smaller number of robots, maybe that would help, or some combination of moving the active robot first or the most recently used robot. Maybe I could be smarter about choosing which direction to try first, like maybe I want to try moving up first or down first. That may not work very well, because a lot of times you kind of have to go up and around to get down somewhere, but there are some ideas there. Maybe there's a way to pre-compute more stopping positions based on where the robots are. Make fewer objects, use more primitive types; that might speed things up. Parallelism is an obvious option: you explore multiple branches of the tree at one time. I haven't played with that too much yet. I could port it to a different language, but why would I want to do that? It's a Ruby conference, so I haven't even really thought about that.

So that's kind of where I'm at with the solver so far, and hopefully there are some algorithms here you maybe haven't seen before, and maybe you can think of some ways to apply them to your own problems. I do want to thank a couple of people. Trevor Yarsh, who's one of the partners where I work and the designer there, did all the slides for me, except for the ugly parts that I added after. So all the stuff that looked nice, the animations, that was all Trevor; he did a great job. And everybody else I work with at Zeal: I had people pairing with me on the solver, giving me ideas, helping me with bugs, things like that. Michael Fogleman is the guy I referred to earlier who did another talk on this game; I got a couple of optimization ideas from him. And I have to give a shout-out to Trevor Lailesh Manaw, the guy who introduced me to the game at Ruby DCamp. Really cool guy, and I want to thank him, because this game's a lot of fun.
And of course, all the screenshots are from WarGames. If you haven't seen the movie, it's kind of fun: Matthew Broderick, Ally Sheedy, 1983, hacking, phone phreaking, all kinds of fun stuff like that. I actually have the code for the solver up on GitHub. If you want to see the depth-first search algorithm, you'll have to go back in the history, because I threw it out pretty early; it wasn't a good solution. But the code's there in its current state. My site's there. I have a blog, like pretty much everybody does, except that I actually write on mine; I post weekly there, so you're welcome to check that out. I'll have the slides up on Speaker Deck a little bit later today. I'm also on SpeakerRate. I'd love feedback, because I want to get better as a speaker, so any constructive feedback you've got for me, I'd love to have it. Thank you. I have Zeal stickers too. If you don't want this dollar half half sticker, you might want a Zeal sticker, so.