Let's go ahead and get started. My name is Randy Coleman. I work at a company called Zeal, and we're going to talk about teaching computers to play games today. Computer game playing has been a fascination for years and years, and a lot of what we know today actually came out of teaching computers to play games: a lot of the algorithms we use, data structures, search techniques, et cetera. There are games as simple as Tic-Tac-Toe, which I'm sure we've all played; teaching a computer to play Tic-Tac-Toe is not actually that hard anymore. Then you can get into some slightly more complex games that you might recognize, or maybe not. And then you can get into the really advanced stuff, which you've probably never, ever played before. If you don't recognize the screenshots, they're from the 1983 movie WarGames. Classic: Matthew Broderick and Ally Sheedy, when they were really, really young.

Anyway, games, like I said, have been an area of research for a long time, and you might recognize this more recent example: they taught Watson to play Jeopardy, and now they're using it for all kinds of other things. What I'm going to talk about today is mostly really simple game stuff, pretty basic level, but it's at least a way to get started. A lot of what game playing involves is graphs, trees, and algorithms. So why would we care about those? I mean, yeah, they're great for games, but what else? Well, we might have to represent hierarchical structures in our apps. I actually worked on a project six or eight months ago where we had a hierarchical structure we had to represent and work with, and I was able to suggest one of these algorithms to solve a problem people had with it. Relationship maps, you know, that you are only two connections away from this person on LinkedIn: that's a graph, and there are algorithms for traversing those graphs. And no, I'm not going to ask you if I can add you to my personal network, don't worry.
Trip planning is a graph problem. Network routing, how do you get your packets from here to there? That's another graph problem, and much more advanced versions of these algorithms are used in those kinds of places. But those are kind of boring business problems. Let's talk about game playing instead.

For my example, I'm going to use a board game called Ricochet Robots. I was introduced to this game a couple of years ago, and as soon as I played it I thought, I bet I could write a program that would play this game for me. And like any side project, what you do is you have this idea and you let it sit, and then you propose a talk on it, and then you have six weeks to write the game player and the talk and get it all done. So that's my bonus lesson on how to side project: don't do it that way. Anyway, in Ricochet Robots you have a 16-by-16 grid like you see here. The center four squares are blocked off; you can't go there. On the grid there are some walls, and those colored areas are called goal targets. In a game of Ricochet Robots, you have to move the robots around the board and achieve different goals depending on which turn it is. The robots are distributed randomly about the board when you start, and then you turn over a little disc with one of the goal symbols on it, and that tells you what you have to do that turn. This game can be played by any number of people, which is really cool; most games, you have your own little token you have to move. So everybody looks at the board and tries to figure out: how many moves does it take to get the green robot into the green square? You can move any of the robots, but a robot can't stop once it starts moving unless it hits something. It can run into walls or it can run into other robots, and that's the only way it stops; then you can send it off in a different direction.
In this case, to get the green robot into the green square, I can come up with a seven-move solution which looks like this: you move the blue robot around, then chase it around with the green robot, bounce off the blue one, and down into the goal. That's basically how the game is played, and then you turn over another disc. There are 17 discs, and an entire game is made up of going through all 17. You're just trying to be the person who can find the shortest solution. Once someone claims a solution, there's a bit of a chance to find a better one; you have about a minute to do that. And there's a tiebreaker rule: if two people come up with solutions of the same length, the person who's behind in the game gets to trump the other player, which really helps even the game out. Even if you're not the fastest player, you still have a chance to stay in the game. So it's a lot of fun, it's pretty cool.

But if we're going to teach a computer to play this game, you have to get a sense of the scope of the problem you're dealing with. In this game, there are actually 976.5 billion possible states the board can be in. You've got the cells and the walls, there are five robots and 252 available squares, and if you do the combinatorics on that you get 976 billion. That's a lot. And from each state of the board there are between nine and 20 possible next states, depending on which robot you move in which direction. Sometimes robots are stuck in a corner, which is why there might be fewer moves, but nine to 20 moves. So that's a pretty huge amount of data, and you're not going to get through all of it in a reasonable amount of time. You need to come up with some ways around that, and that's what we'll be talking about today. The first thing you need to do when you want to write a computer game player is figure out: how am I going to represent the game? What are the data structures going to look like?
We have to start by representing the board. If we think about the game a little, the board itself never changes: you have the 16-by-16 grid with the center island. That's static. Then you have the walls and targets. There are different combinations of those, but for one single game of 17 turns, they never change either. That's static as well. The goal cell changes each turn, so a little more frequently than the walls but less frequently than other things. And then the robots: every time you move a robot, the robot positions change. So you've got this combination of really static information and really dynamic information. If you have to keep track of a lot of board states at a time, you want to minimize the storage you need for each one, and so you want to split up what's static and what's dynamic so you can reduce your storage. That's one of the things you'll find solving big problems: storage efficiency is often almost as important as time efficiency. A lot of times you trade off: oh, I can use a little more storage but then I can get faster. Or gee, I'm running out of memory on my servers and I can't add more, so I need to trade off a bit of speed to reduce my storage usage. That's classic in a lot of performance problems.

The next thing we have to figure out is how to represent robot movement. What we can do is represent the board states, and the movement of robots as lines between them. Here I've got a few examples of moving the red robot a couple of directions, and moving the green robot a little. You can think of the board states as these boxes, with the lines between them being which robot moved to get to that new state. And this data structure, which you may or may not recognize, is called a tree.
It's a fairly classic computer science data structure. For some reason we always draw them with the root at the top instead of at the bottom like a real tree. I don't know, it's like starting from zero, right? In a tree, the very topmost node is called the root, and it has no parent nodes above it; every other node has exactly one parent, and that's what defines a tree: exactly one parent. At the very bottom, the nodes that don't have any children below them are called leaf nodes. When you're working with trees, you need to figure out how to search through them, and there are a bunch of different algorithms for that. Trees are really good for things like maze solvers as well, and Jamis Buck wrote a series of blog posts and turned it into a little book called Basil and Fabian: Three Ways Through. It's a super accessible introduction to these algorithms. It goes through a bunch of different algorithms for solving mazes, cast as a fable: you've got a wizard and his apprentice, and they have to get to a castle, but they have to go through mazes to get there. The wizard teaches the apprentice how to solve the mazes, with a bunch of other adventures along the way. Great book. If you're at all interested in this, I highly recommend it.

The first algorithm I'm going to talk about is probably the simplest one to think about: depth-first search. In a depth-first search, you start at the root and you follow one branch all the way to the end, and then you pop back up and go down again. So it looks like this: you go all the way down the left and then start moving across the tree until you get to the end. That's a depth-first search. Typically depth-first search is implemented using recursion, and that might look something like this. You don't have to fully understand the code, but I wanted to show it for people who are more interested.
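Here's a hedged toy reconstruction of that kind of recursive depth-first solver. The method names (`solve`, `solve_recursively`, `successors`, `solved?`) and the little stand-in graph are mine, not the speaker's actual code, but the shape matches what's described: recurse down every branch, collect complete solutions as candidates, then pick the shortest.

```ruby
# A tiny toy state graph stands in for the Ricochet Robots board.
TOY_GRAPH = {
  start: [:a, :b],
  a:     [:goal],
  b:     [:c],
  c:     [:goal],
  goal:  []
}

def solved?(state)
  state == :goal
end

# Every possible next path from where this path currently ends.
def successors(path)
  TOY_GRAPH[path.last].map { |next_state| path + [next_state] }
end

# Walk every branch to the end, collecting complete solutions.
# Note that DFS has to see the whole tree before it knows which
# candidate is shortest -- the weakness discussed later in the talk.
def solve_recursively(path, candidates)
  if solved?(path.last)
    candidates << path
  else
    successors(path).each { |p| solve_recursively(p, candidates) }
  end
  candidates
end

def solve(initial_state)
  candidates = solve_recursively([initial_state], [])
  candidates.min_by(&:length)   # nil if no solution was found
end

p solve(:start)   # => [:start, :a, :goal]
```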
So I have a solve method that generates a path from the initial state and then solves that recursively, which builds up a list of candidates; then you go through all the candidates and find the shortest one, the smallest by length. If you don't find one, you just return that there's no solution. In the recursive part, you look at the path, and if it's solved, you put it on your list of candidates. Otherwise you find all the allowable successors of that path, every possible next state I could get to from where I am now, and for each of those you make a recursive call and try again. That's a recursive solution, and it implements depth-first search.

Now, you might notice there could be some problems with this algorithm. For example, you can get into cycles on the board. The green robot up there on the upper right can get going around in circles, and if you have a naive depth-first search algorithm, it's never going to finish, because that robot is going to keep going around in circles and you're never going to stop. So you have to build in some kind of cycle detection. What I started with here, this is computing the allowable successors, which you haven't seen yet: look at all the possible moves, generate a successor for each of those moves, and get rid of any nils. And successor just says, give me the next robot, which is the robot moved in a certain direction, and generate a new path based on that. You have to change that if you've got cycles, to reject any cycles, and a cycle is detected just by keeping track of all the states you've been to before. Now notice the way I've done this: the visited list is new for each different path. The reason for that is that you can get to the same state in two different ways.
If I explore the path on the left first, I get to that state in four moves, and then later when I explore the other path, I get there in three moves, but I've already seen that state, so I'm going to throw it out. I visited that state, I don't need to explore it again... except that I just found a shorter way there. That's why each different path I explore has to have its own visited list: otherwise you don't find the optimal solution, the shortest path. So really, what's going on here is that this nice tree picture I showed you isn't really a tree, it's a graph. A graph is like a tree except that nodes in a graph can have multiple parents. That's really the only difference. There are different kinds of graphs, which I won't get into too much, but this is really a graph, and so you can get to the same state from different directions.

Now, this algorithm with a cycle detector could be used to solve mazes pretty easily. Jamis's book shows that well, but in Ricochet Robots there are some extra complications. For example, if your robot starts on the goal cell right away, that's not a legal solution. I mean, this is a zero-move solution, you're in the goal cell, right? But the rules say the robot has to ricochet at least once; it has to change direction to the left or the right at least once. So that's not a legal solution, and you have to be able to get out of there and back into the goal. And again, that would look like a visited state, but it's not, because it wasn't legal the first time. You have to watch for that. There's a similar one where the robot starts one move away from the goal, and in this case the shortest solution actually takes you through the goal, so you go around here. That totally looks like the cycle we just saw a few minutes ago, but because it passed through a state that wasn't a legal solution, it's not actually a true cycle, and you have to watch out for that.
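Setting the game-specific legality wrinkles aside, the basic cycle detection with a per-path visited list might be sketched like this. The toy graph, the `Set`-based visited list, and the method names are my own guesses at the shape of the speaker's code; the key point is that each recursive branch gets its own copy of the visited set.

```ruby
require 'set'

# A toy graph with a deliberate cycle: a -> b -> a.
CYCLIC_GRAPH = {
  start: [:a],
  a:     [:b, :goal],
  b:     [:a]
}
CYCLIC_GRAPH.default = []

# Reject any successor state this particular path has already seen.
def allowable_successors(path, visited)
  CYCLIC_GRAPH[path.last]
    .reject { |s| visited.include?(s) }
    .map    { |s| path + [s] }
end

def solve_dfs(path, visited, candidates)
  if path.last == :goal
    candidates << path
  else
    allowable_successors(path, visited).each do |p|
      # `visited + [p.last]` builds a fresh set, so sibling branches
      # don't pollute each other's history -- that's the per-path trick.
      solve_dfs(p, visited + [p.last], candidates)
    end
  end
  candidates
end

solutions = solve_dfs([:start], Set[:start], [])
p solutions.min_by(&:length)   # => [:start, :a, :goal]
```

Without the visited set, the a → b → a loop would recurse forever; with a single shared set, a state reached later by a shorter route would be wrongly discarded.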
Different game-playing algorithms will have different complications based on the rules of the game you're playing. Tic-tac-toe is very simple. Maze solving is relatively simple if you're watching for cycles. But this game is a little more complex, because there are optimal solutions that are not legal solutions, and you have to watch for that. I thought I had this handled in an early version of my solver, and then I was playing with some different algorithms and found out I was missing known shortest solutions for some reason, and it's because I did not have this condition right. I'm still not positive I've got it right. It's really tricky to get right. Anyway, once you get back to that state, you can get in the goal and you win the turn.

Overall, depth-first search is a relatively simple algorithm, but for this game it's not great, because you have to search the whole tree before you can figure out which solution was shortest. You've got to go down to all of the leaf nodes and then find which one is the shortest path, and with 976.5 billion states, that's not going to happen. We need a better algorithm. The next one I looked at is called breadth-first search. A breadth-first search algorithm, instead of going down one branch of the tree, goes across the levels of the tree, and that looks like this: you look at the first level, then down one more, and down one more, until you get to the end. That's a breadth-first search: going across the breadth of the tree, not down the depths. In code it looks like this. You start with an array, and the array is actually serving as a data structure called a queue, which works like a line at a store: the first person into the line is the first person out the other end getting served. That array serves as a queue, and as long as there are paths in that queue, we take the first one off the front, which is what the shift does. If it's a solution, we return it.
With breadth-first search, the first solution we find is guaranteed to be the shortest, because we've looked at all the one-move and two-move and three-move solutions before we get to the fours. So the first time I see a four-move solution, that's my shortest path. Good to go. If that path wasn't a solution, you take all the allowable successors of the path, shovel them onto the end of the queue, and you get to them when you get to them. And if you get through that loop with no paths left to explore and you haven't found a solution, then there's no solution. The really cool thing about breadth-first search is that now we can have a global visited list, because if I come across a state that I've already seen, I know that I reached it before in at most the same number of moves, either fewer or the same, because I've searched all the shorter paths already. That's pretty cool. And it looks like this: you make a set at the top with all the visited states, you go on to the next iteration of the loop if you find one you've already seen, and otherwise you shovel it onto the visited list. That's cool because with this global visited list, you don't have to worry about the case where there's a shorter path.

But even with breadth-first search, which worked pretty well, it was still too slow, and that's when you have to start optimizing your algorithms. There are basically two things you can do, and I've added a third that plays into the other two: you can do less things, or you can do things faster, and you can use heuristics to choose which of those to do. I'll talk about heuristics in a minute. Doing less things in this context really means: can we process less of that game graph? Can we just process fewer states?
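Before getting into those optimizations, the plain breadth-first solver with its global visited set might be sketched like this. Again the toy graph and names are mine; the structure (array as FIFO queue, `shift` off the front, one shared visited set) follows the talk.

```ruby
require 'set'

# Toy state graph: two routes to the goal, one shorter than the other.
GRAPH = {
  start: [:a, :b],
  a:     [:b],
  b:     [:goal]
}
GRAPH.default = []

def solve_bfs(initial_state)
  visited = Set[initial_state]
  queue = [[initial_state]]        # plain array used as a FIFO queue
  until queue.empty?
    path = queue.shift             # oldest (shortest) path comes off first
    return path if path.last == :goal
    GRAPH[path.last].each do |s|
      # One global visited set is safe here: any earlier sighting of
      # this state happened on a path at least as short as this one.
      next if visited.include?(s)
      visited << s
      queue << path + [s]
    end
  end
  nil                              # queue drained with no solution
end

p solve_bfs(:start)   # => [:start, :b, :goal]
```

Because levels are explored in order, the first solution dequeued is guaranteed shortest, which is exactly why the global visited list is sound.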
Can we find ways to do that and get away with it? Doing things faster would be doing less work per state, or doing that work faster somehow. And heuristics is really a fancy name for a rule of thumb. The thing about heuristics, or rules of thumb, is that sometimes they work and sometimes they don't. You might find a rule of thumb that works almost all the time. If you were in André Arko's talk earlier, he talked about Bundler, where they've got this really, really hard graph search problem to resolve dependencies, and they've come up with a bunch of heuristics that make it work really fast most of the time. There are still some degenerate cases you can get into, but they've got a bunch of heuristics in there for making things faster, and that's exactly what I'm talking about here. So there are times when heuristics work great, and there may be times when they just don't.

Okay, I'm going to take you through the optimizations in the order I did them, because I think it tells a better story, so I'll be jumping around between these three things a little. The first heuristic I tried came from playing the game: I realized the last move you make in any turn is to move the goal robot, or as I call it the active robot, the one you're trying to get into the goal cell, into its cell. That's going to be the last move you make, so at any level of the tree, let's try moving the active robot first, because if I'm one step away from the goal, I'll find it first without having to look at all the other robots. So I tried that heuristic: here, just before I start calling the solver, I find the index of the goal robot and use the rotate method to get it to the front of the array. rotate's kind of a cool method.
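The rotate trick might look something like this. The robot names are just illustrative, but `Array#rotate` works exactly this way: it brings the element at a given index to the front while keeping everyone else's relative order.

```ruby
# Move the active (goal) robot to the front of the move ordering.
robots = [:blue, :yellow, :green, :red, :silver]
goal_color = :green

idx = robots.index(goal_color)
ordered = robots.rotate(idx)   # active robot first, rest keep relative order

p ordered   # => [:green, :red, :silver, :blue, :yellow]
```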
And this is what happened. I plotted all my timings: I had one sample game and I ran through all 17 turns. The blue line with the circles is the original algorithm, and the red line with the triangles is the new algorithm with the active robot moved first. You can see that for the most part the red line stays below the blue line; the places where they're exactly the same are places where the original algorithm already had the active robot first, so it's the same algorithm in those cases. There's that one outlier case where this heuristic just didn't work; you can see the big peak where the red is separated from the blue, and that kind of skewed all the timings. But like I said, heuristics sometimes work and sometimes don't. I never have figured out why it was so bad in that one case. I decided to keep that heuristic for the rest of the examples because it looked slightly better in most cases.

The next thing I realized is that I was checking for solutions too late. I was checking whether a path was a solution only when I started to process it (this is the code we saw a little earlier), so I wouldn't find the solution at node 16 there until I got to it in the search; I'd have gone through all 16 nodes before I got there. But if instead I look for solutions as soon as I generate the successors, like this: I get the allowable successors, I look to see if any of those is a solution, and if so I return it right away, and only then put the successors onto the queue. What that means is that I'd find the solution at node 16 while I was still processing node six. Even in this really small tree, I was able to throw out 10 nodes that I didn't have to look at. That turned out to be a really big savings in time. This graph is the total number of states I had to consider to play my sample game, and it was almost a factor of three fewer states.
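A sketch of that change, checking successors for a solution at generation time instead of at dequeue time, might look like this (toy graph and names mine):

```ruby
# BFS that looks for a winner as soon as successors are generated,
# rather than waiting for the winning path to come off the queue.
def solve_bfs_early(initial_state, graph)
  visited = [initial_state]
  queue = [[initial_state]]
  until queue.empty?
    path = queue.shift
    successors = graph[path.last].reject { |s| visited.include?(s) }
                                 .map    { |s| path + [s] }
    # Check for a solution immediately, before enqueueing anything.
    solution = successors.find { |p| p.last == :goal }
    return solution if solution
    successors.each { |p| visited << p.last; queue << p }
  end
  nil
end

graph = { start: [:a, :goal], a: [] }
graph.default = []
p solve_bfs_early(:start, graph)   # => [:start, :goal]
```

The saving is that you never have to finish sweeping the rest of the current level, nor any of the final level, once a goal state shows up among the freshly generated successors.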
That was a huge win, because you just don't have to explore the rest of the level you're on, and all of the next level up to the point where you find the solution. Then I pulled out my profiler. If you saw Leen's talk this morning, which I unfortunately didn't, she talks all about how to profile and tune performance. What I found was that I was spending way too much time figuring out where robots were going to stop, because basically what I was doing was saying, let's try moving one: can I move? Can I move? Can I move? That's a lot of work. So what I do now is, at the beginning of the solver, I take the board and compute: if there are no other robots in the way, where would I stop, from any given cell, going in any of the four directions? What I'm doing here is trading off space for time. I'm taking up a little more storage to store this chart of stopping cells, but that buys me more speed in my algorithm, and I wasn't running out of memory, so I was okay making that trade-off. For example, for the green cell right there, I'd say: if I'm going to move up, I'll stop at the top wall, or if I'm going to move left, I'll stop at the other wall. I still have to look for robots being in the way, but at least this part is much faster, and it certainly was. This is the number of states processed per second, and it went up by a factor of one and a half or so; I could process that many more states in the same amount of time. So that was a pretty big speedup as well.

The next thing: I was looking at some other solvers online, and there was a talk by Michael Fogleman where he made the observation that the only robot you really care about is the goal robot, and the colors of the other robots don't matter at all. So in these two states here, the green robot is in the same position.
All the other robots are in the same positions, but there are different colors in those positions, and I can treat these two states as equivalent, because the only robot I care about is the green one. It doesn't matter if I bounce off the blue one or the yellow one; I don't care. So these two states are actually equivalent, and if I treat them that way, I can save myself a little bit of work. It turns out I was also able to process states faster, because what I did was compute for each board state something called an equivalence class: some representation such that two boards that are equivalent by this rule have the same equivalence class. What I did was compute a set of a position hash for each robot, and for the active robot I just added a thousand so it came out as a different number. So for example I might have, I don't know, a 131 and a 242 and a 15 and a 1001; the exact numbers don't matter. I just wanted those two sets to be equal if they had the same numbers in them, without caring about order. Anyway, I computed this equivalence class, and that did allow me to save a little bit of work: a few fewer states, not a lot of savings there, but some. And it turns out that comparing those equivalence classes was faster than comparing the whole board state like I'd been doing before, so it ended up being a "do things faster" win as well. That was kind of cool.

Then another little mini-optimization: I discovered that if I take those equivalence classes and instead use an array and sort it (I'm not sure if you can see the line there, but I've crossed out the Set.new part and added a .sort on the end), it turns out that comparing sorted arrays is faster than comparing sets in Ruby.
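That equivalence class might be sketched like this. The `position_hash` formula and the 1000 offset follow the talk's description, but the details (hash-per-robot representation, method names) are my own guesses:

```ruby
# One number per board cell on a 16x16 board; the active robot gets a
# large offset so it sorts as a distinct value from the passive robots.
ACTIVE_OFFSET = 1000

def position_hash(robot)
  robot[:row] * 16 + robot[:col]
end

def equivalence_class(robots, active)
  robots.map do |r|
    h = position_hash(r)
    r == active ? h + ACTIVE_OFFSET : h
  end.sort    # sorted array: compares faster than a Set, order-insensitive
end

green = { row: 0, col: 1 }
a = equivalence_class([green, { row: 2, col: 5 }, { row: 7, col: 3 }], green)
b = equivalence_class([green, { row: 7, col: 3 }, { row: 2, col: 5 }], green)
p a == b   # => true: swapping the non-active robots' colors doesn't matter
```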
Which, if you think about it, makes sense, because with sorted arrays you can compare the first element, second element, third element, and stop as soon as you find a difference. Sets aren't ordered that way, so they compare differently; they can still short-circuit, but this was actually faster. Normally sorting slows things down if you have to keep re-sorting all the time, but I sort once, cache the result, and do the comparisons a lot. So this was a decent savings, and the bars are growing again a little bit.

The next thing the profiler told me was that I was creating a lot of objects I didn't need to create. Down at the bottom there, there's a method called with_robot_moved on the board state that returns a new board state with a given robot moved in a given direction, and I can ask a robot to give me itself moved to a new location. This is very functional-style code, because I needed to be making copies of all the states; I needed to keep the old ones around to consider them. What was happening is that sometimes, asking for the robot to move in a direction when it's right up against a wall already, I'd get back a copy of the robot in the same position it was already in. And I figured out I don't really need to be creating those new robot objects all the time. So what I do now is figure out what the robot's next cell is going to be, and if it's the same cell it's already in, just return self; otherwise, make the new robot. Then I check for that in the board state: if I get back the exact same robot object, I can bail out early. That was a pretty big win. Creating fewer objects is always a good win. And then I also realized there are some places where I could do identity checking instead of deeper equality checking: identity checking is whether two objects are the identical same object, versus checking whether they have the same values in their instance variables.
Right there I could use the equal? method, and that was faster as well. Again, a pretty big win. So let me sum up where we are so far. Those bars have dropped a lot; the solver got a lot faster. The longest turn in the sample game is down to just over a minute, I think, and it plays the whole game in about three minutes and seven seconds, and it's a fairly complex game. I've had games where it can play the whole game in 30 seconds, which is insane. I don't want to play against it anymore. So that's where I'm at with performance. There are still some longer solutions that it probably wouldn't find in a reasonable amount of time, where a human could probably still beat it. I wanted to make it faster, and I've been working on that without much luck, so I'm giving away the punchline: this is about as fast as I've gotten the thing so far.

But I do want to talk about another algorithm, called best-first search. So again, here's our tree. In best-first search, you don't necessarily go down a branch, and you don't necessarily go across the levels of the tree: you try to figure out which node of the tree is the next best choice to explore. You're working from a fringe of the ones you've already explored, but you decide: what's the next best one? That might come out looking something like this, where you're bouncing around the different levels of the tree, not necessarily taking the shortest paths first. In code, that ends up looking like this: instead of a normal queue like we were using for breadth-first search, we use something called a priority queue. A priority queue sorts the elements in the queue by some priority value, so the higher-priority elements come out first. I found this gem called fast containers that has a pretty fast priority queue; it wraps a C implementation.
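To show the shape of the best-first loop without depending on that gem, here's a hedged sketch where a plain sorted array stands in for the C-backed priority queue, and `score` is a placeholder (just path length here; the talk's actual scoring function comes up in a moment):

```ruby
# Best-first search: always expand the lowest-scoring path on the fringe.
def score(path)
  path.length   # placeholder score; A* will add an estimate to this
end

def best_first(initial_state, graph)
  queue = [[score([initial_state]), [initial_state]]]
  until queue.empty?
    queue.sort_by!(&:first)        # poor man's priority queue: lowest first
    _, path = queue.shift
    return path if path.last == :goal
    graph[path.last].each { |s| queue << [score(path + [s]), path + [s]] }
  end
  nil
end

graph = { start: [:a, :b], a: [], b: [:goal] }
graph.default = []
p best_first(:start, graph)   # => [:start, :b, :goal]
```

A real priority queue keeps the elements ordered as they're inserted instead of re-sorting on every pass, which is what the C-backed gem buys you.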
I played with it a bit. There's actually a SortedSet in Ruby; don't use that. It's horrible performance-wise, especially if you need to iterate it. To do each it basically, oh, I forget exactly what it does, but it's very bad performance. This gem was actually pretty good. So again, you find the successors and look for solutions. There are a couple of calls there to add_path and add_paths, and those look like this: add_paths just iterates all the successors and calls add_path, which pushes the path onto the queue along with a score, and I've got the queue set up to sort by lowest score first. The scoring method is what's interesting; it's basically how you decide which is the best next state to explore. That scoring method has all the magic. This kind of looks like Smalltalk code, and I was a Smalltalker. There's a saying about Smalltalk code that it always looks like the work happens somewhere else, and you can see that here: the work's always happening somewhere else. Well, the score method is where the magic happens.

There are a bunch of different best-first search algorithms, but the best-known one is called A*. It's an evolution of an earlier algorithm, and it improves on it a bit. In A*, like I said, each state is scored, and the lowest score is best. The score is the distance traveled so far (how many moves we've made) plus an estimate of how many moves we have left to go. That's going to be a guess, because we don't actually know; if we did, we'd already know how long the solution is. But we need some kind of estimating function. That looks like this in code: the score method just computes path.length plus best_estimate. best_estimate basically goes through all the active robots. There can be multiple active robots, because there's one target on the board that any robot can move to, so there can be more than one.
That's why I need a map here: just find the estimate at each robot's cell, take the smallest of those estimates, and return it. Now, there are a couple of properties of the A* algorithm. First of all, you cannot overestimate in your scoring function: your estimate of how much distance you have left to go has to be lower than or equal to what the actual solution will be. If it overestimates, it doesn't work. And there's another, more convoluted property: if your estimation function is monotonic, which is a fancy math term I'll explain in a second, then it's guaranteed that the first solution you find is the best solution. If it's not a monotonic scoring function, you have to keep going a little longer. Not too long, it's still not horrible, but a monotonic estimating function is better. What I mean by monotonic is: if I have an estimate of, say, four, and I make one move, the new estimate can't be any lower than three. I can't make one move and go from a four to a two. That's basically what monotonic means.

So now, how do we figure out our estimating function? Again, I stole this idea from Michael Fogleman, thank you very much. What he suggests is: pretend the robots can stop anywhere they want; they don't have to hit a wall first. How many moves would it take a robot to get to the goal cell if it could stop exactly where it wanted? If you're already on the goal cell, that's obviously zero; you have no more moves to make. Anything in a straight line from there is a one. Anything in a straight line from the ones is a two, and anything in a straight line from the twos is a three, and fours and fives. So from anywhere on this board, it's no more than five moves to get to the goal cell. Now, my colleagues tell me that basically I could write a Minesweeper player now that I have this picture, but I don't know. Anyway, that's the estimating function.
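The talk doesn't show the real code for this, but a toy version of that flood precomputation might look like the following. The wall representation (a set of cell pairs you can't cross), the board size, and the names are all my own assumptions; the idea is the talk's: sweep straight lines out from the goal, level by level, with walls still blocking the sweep even though the imaginary robot can stop anywhere.

```ruby
require 'set'

SIZE  = 4
# One wall between (1,0) and (2,0), stored in both directions.
WALLS = Set[[[1, 0], [2, 0]], [[2, 0], [1, 0]]]

# Every cell reachable from `cell` along a straight line, in any of the
# four directions, stopping a sweep when a wall or the board edge blocks it.
def neighbors_in_line(cell)
  [[1, 0], [-1, 0], [0, 1], [0, -1]].flat_map do |dr, dc|
    line = []
    cur = cell
    loop do
      nxt = [cur[0] + dr, cur[1] + dc]
      break unless nxt[0].between?(0, SIZE - 1) && nxt[1].between?(0, SIZE - 1)
      break if WALLS.include?([cur, nxt])
      line << nxt
      cur = nxt
    end
    line
  end
end

# Breadth-first flood outward from the goal: cells on straight lines from
# the goal are 1, lines from the ones are 2, and so on.
def estimate_table(goal)
  dist = { goal => 0 }
  frontier = [goal]
  level = 0
  until frontier.empty?
    level += 1
    frontier = frontier.flat_map { |c| neighbors_in_line(c) }
                       .uniq
                       .reject { |c| dist.key?(c) }
    frontier.each { |c| dist[c] = level }
  end
  dist
end

table = estimate_table([0, 0])
p table[[3, 3]]   # => 2  (align once, then one straight line in)
p table[[2, 0]]   # => 3  (the wall forces a detour)
```

Because the relaxation only removes constraints (robots may stop anywhere), the resulting numbers never overestimate the true distance, which is exactly the admissibility property A* needs.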
I'm not gonna show you the code for it. This is something that you can pre-compute: as soon as you know what the target cell is, you can pre-compute all these numbers for the target cell and then just do a lookup. So that code I showed you, where I was doing estimate-at-a-cell, is just doing a lookup in a table like this. Like I said earlier, this algorithm should be better, but I have not been able to actually improve on the performance I had before. The reason, I think, is that so many states come up with the same estimate, because a lot of the time you're not moving the active robot, and so your score hasn't actually changed. So I need to mix this in with some other heuristics. I've been playing with different ones, like trying to move robots that I've moved recently, because often your solutions only involve maybe two or three robots instead of all five. So try moving the robots that I've already moved first, or some mixture of moving the active robot first and the recently used ones. I've gotten close to the same performance as the breadth-first search, but not quite the same, which is sad, but that's how it's working so far. So I've got some ideas of where to go with this next. Like I said, when I'm looking for stopping points, I still have to figure out if there are any robots in the way, so maybe some kind of collision detection algorithm would work there, because the one I'm using right now is still pretty naive. Maybe there's a way to work backwards. When I'm playing the game myself, I do this a lot: figure out where the goal cell is, and all those red highlights are places that a robot could stop before it moves into the goal cell. So maybe there's a way to work backwards from there; I'm not quite sure how I'd implement that yet. Again, I talked about this already: try moving the most recently moved robots first, or some combination of active and most recently moved.
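That ordering heuristic could be sketched like this. Everything here is hypothetical; the talk doesn't show this code, and the robot names and data shapes are invented for illustration:

```ruby
# Hypothetical sketch: when expanding a state, rank the candidate robots so
# the active robot is tried first, then recently moved robots, then the rest.
def ordered_robots(robots, recently_moved, active)
  robots.each_with_index.sort_by do |robot, index|
    rank = if robot == active then 0
           elsif recently_moved.include?(robot) then 1
           else 2
           end
    [rank, index] # index keeps the original order within each rank
  end.map(&:first)
end

ordered_robots([:red, :blue, :green, :yellow], [:green], :blue)
# => [:blue, :green, :red, :yellow]
```

The search still explores every robot eventually, so this only changes the order states get scored and queued, not which solutions are reachable.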
Maybe there's a way to choose which direction to try next more intelligently, like choosing the direction that would move me closer to the same row or column as the goal cell. But a lot of the time you have to kind of go up and around the board to get in, so maybe that wouldn't work; I don't know. Maybe there's a way to pre-compute the per-robot stopping positions and then just update them as a robot moves. I'm not yet sure how well that would work; I haven't played with it yet. Obviously, using fewer objects and more primitive types might speed things up; if I find more places where I'm just creating too many objects, fixing that would speed things up. I've still been running the profiler and trying to optimize hotspots. I had one great one: I found something that looked like it should be a hotspot, added some caching, and went from three minutes to four and a half minutes solving my game. So, not an optimization. You run into a lot of this kind of stuff. That's why the profiler is your friend: almost always, when you're working on performance problems, your intuitions about what is slow are wrong, and the profiler basically keeps you from fooling yourself. This would also be an obvious place for some parallelism; maybe I should get some parallel hardware to play with and try it on there. That'd be kind of cool. I haven't played with parallelism that much here. I could do other languages, but I mean, this is a Ruby conference, so why would I? Although I've toyed with the idea of porting this to Crystal and playing with that, which might be kind of fun. Anyway, that's where I'm at with the solver so far. I need to thank a few people. Trevor Yarich is a colleague at Zeal; he's one of our designers, and he helped me with these slides. All the animations and the graphics were his. The parts where the slides are ugly are things that I did to it after he had his hands on it. So, sorry, Trevor.
The other people at Zeal paired with me on this and gave me ideas and advice, so thanks to them. Michael Fogleman I mentioned a couple of times; I got some ideas out of his talk about this. Trevor Lailish-Menaugh is the guy who introduced me to this game at Ruby DCamp a couple of years ago, and I love it. And all the screen captures are from War Games. I've actually got the code up on GitHub if you wanna go look at it. I've got some time for questions. I'll be posting the slides to Speaker Deck shortly; they'll be there. And I'm also on SpeakerRate, so if you wanna give me feedback, I'd appreciate it; I always like to get better at talking. And I think that's about it. I will mention I work for Zeal, which is a web and mobile application development consultancy in Southern Oregon. We are hiring, and we've also just started an apprenticeship program. It's a paid apprenticeship program, so if that interests you at all, come and talk to me about it. And I have some Zeal stickers if you want one, and also Stellar High Five stickers. We started that site; you can actually go there and give people a virtual high five on Twitter. If somebody's done something awesome that you like, go give them a Stellar High Five and thank them for their contributions. So, I've got some time for questions. Thank you very much. Okay, so the question is, what profiler did I use? I was just using ruby-prof, and that actually works pretty well. It gives you HTML output, and you can kinda click to jump around the different methods that are being called. All right, so the question is, do I have any examples or ideas where some of the concepts here have affected day-to-day work? I mentioned early on there's a project I worked on where there was a hierarchical data structure we were representing, basically like an organization chart. And we needed to be able to find, what was it, common ancestors or common descendants of nodes in the tree.
And so a breadth-first search was great to find kind of the nearest common node. The guys who were working on it were asking on our Slack channel about how to solve this, and I said, oh, you could try this algorithm, kinda walked them through it, and it worked great for them. So that's one example I have. Right, so the comment is that places where you can save yourself object creation can have a huge impact. In their case, on Ruby 1.9.3, they were about 550 times faster by not creating new objects when they didn't have to. Any other questions? All right, thank you very much. Appreciate it.
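The nearest-common-node search mentioned in that answer could be sketched like this. This is a simple ancestor-chain version, not necessarily the algorithm they used, and the org-chart data shape and names are assumptions:

```ruby
# Hypothetical sketch: find the nearest common ancestor of two nodes in an
# org chart, where `parents` maps each node to its parent (nil at the root).
def ancestors(parents, node)
  chain = [node]
  chain << node while (node = parents[node])
  chain
end

def nearest_common_ancestor(parents, a, b)
  seen = ancestors(parents, a)
  # walking up from b, the first node also above a is the nearest common one
  ancestors(parents, b).find { |node| seen.include?(node) }
end

parents = { dev1: :lead, dev2: :lead, lead: :vp, ops1: :vp, vp: nil }
nearest_common_ancestor(parents, :dev1, :ops1)  # => :vp
nearest_common_ancestor(parents, :dev1, :dev2)  # => :lead
```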