Thank you all very much for coming. This is actually the first conference I spoke at, back in 2010, so it was really nice to come back here. I'm going to talk about recursion today. By way of a brief introduction: I'm James. In terms of Ruby stuff, I created a project called Faye, which is a bunch of WebSocket stuff that's now part of the stack of things that makes up Action Cable. But I'm not really going to talk about Ruby so much. I'm going to talk about a bit of Ruby, a bit of some other languages, and a bunch of ideas that are connected by this one idea of recursion and why it's important.

"Not the wind, not the flag; neither one is moving, nor is anything moving at all. I have discovered a great theorem, which states: motion is inherently impossible." These words are spoken by the philosopher Zeno in Douglas Hofstadter's classic book, Gödel, Escher, Bach. We know it's false, right? We can move around. But his idea that you can break something down into an infinite set of tiny, tiny pieces, and do that recursively, actually turns out to have a lot of important applications, and I'd like to talk to you about those today.

Recursion was at one time a really contentious idea in programming. There was a lot of debate in early programming language design about whether it should be included. But now we just sort of take it for granted, right? We're all pretty comfortable with it. So, for example, if you want to recursively delete a directory: if you start with a file, then you just delete the file. Otherwise, if it's a directory, you have to delete everything inside it first, because you can't delete a non-empty directory. So you look up the children of the directory, delete each of those recursively, and then you can delete the directory itself.

Likewise, if you want to spider a website, or the whole internet: you start with a URL, you go and look up the page at that URL and parse it, and then you find all of the links in it. And for each one of the links you find, you just do the same thing again: you spider that link, and you do that recursively, and you will end up reading the whole internet if you run this, probably.

So, we're quite used to writing recursive functions. But what we don't think about so much is recursive structures. We make them all the time, like when we generate HTML or a website or a bunch of files on disk. We're making recursive structures, but we don't really think about the structures themselves as having power and meaning. And I'd like us to focus more on those structures and what they can do.

So, by way of a brief example: in JavaScript, the way that inheritance works is that every object has a special field on it called proto. When you make an object and then call Object.create, that says "make me a new object that inherits from this one". And all that means is that it sets this special proto field to the thing you've inherited from, and you can make chains of those things. Then, when you want to look up a property or invoke a method on an object, what JavaScript does is this. Say I ask for name on the last object I created: JavaScript goes, "do you have a property called name? No, you don't. So I'll look in your proto, whatever that points at." That pointed at p1, so it does the same thing over there. That also doesn't have a name field, so it looks up p1's proto, which is the original object. And that object does turn out to have a name field, so we look it up on there and return the result.
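Here's a minimal sketch of the kind of chain I'm describing; the object names are just for illustration:

```javascript
// Every object carries a hidden proto link to the object it inherits from.
var person = { name: "Alice" };

var p1 = Object.create(person); // p1's proto is person
var p2 = Object.create(p1);     // p2's proto is p1

// p2 has no own "name" property, and neither does p1, so the lookup
// walks the proto chain: p2 -> p1 -> person, and finds it there.
console.log(p2.name); // => "Alice"
```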
So, this proto thing forms a sort of recursive chain between objects, and it exists for the single purpose of implementing inheritance. And you can understand how it works without seeing all the code inside the JavaScript runtime that actually makes it work. Just looking at the structure is really enough to tell you how it works, in a way.

So, I'd like to explore this idea in three main areas: correctness, performance, and meaning. We'll find out what I mean by each of those in due course.

I'll start with correctness, and the idea of being able to prove that programs do what you expect them to. In Ruby, we don't do this so much. We use unit testing to check that our programs are correct. But unit testing is not proof. You can write a lot of unit tests for a program and still not find bugs in it. Proof means that you know that something is correct for all possible inputs, and recursion is a thing that lets us do that.

So, there's an idea in mathematics called induction, and it works a little bit like this. Say you want to find the sum of the numbers from one to n. Just to list out a few examples: the sum from one to one is one. The sum from one to two is three. The sum from one to three is six. The sum from one to four is ten. And if you continue this, you might eventually notice that there seems to be a pattern in these numbers, and that pattern is that the sum from one to n is equal to n times n plus one, all over two. You can list out loads and loads of examples, and you'll never find a counterexample for this, but that doesn't constitute a proof, right?

Intuitively, an argument for why this is true goes: OK, take the numbers from one to ten. You can pair up one with ten, and two with nine, and three with eight, and four with seven, and five with six. So now we have five pairs, which is n over two, of things that each add up to eleven, which is n plus one. So n over two, times n plus one, gives you the right answer. But that's an example. It's not a proof that it's true in all cases.

The way proof by induction works is that it makes a recursive argument. You assume that the thing you want to prove is true in one case, and you use that to prove that it's true in the next case. That is, if it's true for n, you want to prove that it's also true for n plus one, and you use the fact that it's true for n to do that. So, by definition, the sum from one to n plus one is the sum from one to n, plus n plus one: you're just sticking the extra number on the end. Now I'm going to assume that the thing we're trying to prove is true: for the sum from one to n, I substitute the expression we believe it equals, and then do a little bit of moving the terms around. We can multiply out the top of that fraction to get n squared plus n, over two, plus n plus one. Then we can turn the second term into a fraction, making it two n plus two, over two. Adding those two fractions together gives us n squared plus n plus two n plus two, all over two, which is just n squared plus three n plus two, over two. And you can factor the top line into n plus one times n plus two, all over two, which is really n plus one, times n plus one plus one, all over two.
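Written out as equations, my reconstruction of that inductive step looks like this:

```latex
\text{Claim: } \sum_{k=1}^{n} k = \frac{n(n+1)}{2}.
\quad \text{Assume it holds for } n \text{. Then:}

\begin{aligned}
\sum_{k=1}^{n+1} k &= \sum_{k=1}^{n} k + (n+1)
                    = \frac{n(n+1)}{2} + (n+1) \\
                   &= \frac{n^2 + n}{2} + \frac{2n + 2}{2}
                    = \frac{n^2 + 3n + 2}{2} \\
                   &= \frac{(n+1)(n+2)}{2}
                    = \frac{(n+1)\bigl((n+1)+1\bigr)}{2}
\end{aligned}
```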
And what we've done is we've taken our expression in n and made an exact copy of it, just replacing n with n plus one. We've used the fact that it's true for n to prove that it's also true for n plus one. Now, all you need is some example where it's true. If you find some number where it's true, then you know it's true for all of the numbers after that, by this argument. And at the start, we saw some examples, beginning with one. So we now know this is true from one onwards.

We can do the same thing with data structures. Similar to mathematical induction, there's a thing called structural induction, which lets you take this idea that we've used with integers and apply it to data structures. So in Ruby, we have arrays, which are these sort of flat lists of values. In a functional language, lists are defined recursively: a list is either the empty list, or it's a value paired with some other list. The way you'd write that in Haskell syntax is with data, which is how you define types: a list of type A is either the empty list, or it's something of type A paired with a list of A's. And then you define functions on those things recursively. So the list one, two, three, four is really equivalent to one, paired with two, paired with three, paired with four, paired with the empty list. It forms this tree structure, this sort of binary tree, which is a recursive structure, and you can use that structure to prove things about it.

In an imperative language like Ruby, we'd define things like the length of a list, or mapping over it, in a procedural way. For length, we'd start with n as zero, and then for each element we'd increment n, and then we'd return it. To map over a list, we'd make a new list, then iterate over the input, push the result of applying the function to each item into the result array, and then return it. So these define procedures, literally processes: sequences of instructions for how to do the thing you want to do.

The way you define these in a functional language is that you just make statements that things are equal to each other. So, for example: the length of the empty list is zero. The length of any other list, where we're destructuring it into its first element and the rest of its elements, is one plus the length of the rest of the elements. Mapping over an empty list also gives you an empty list, and mapping a function over any other list means that you apply the function to the first element, and then pair that with the map over the rest of the list. So in Haskell, functions aren't procedures. They're not sequences of steps that are executed one by one. They're just statements that this is equal to that: you can rewrite this as this other thing.

I'm going to use this idea to prove something called the functor composition law, which is a statement about how function composition and mapping work together. Haskell has a function called dot, which composes functions: f dot g, applied to some value x, is the same as f of g of x. All that's saying is that those two things mean the same thing. The functor composition law says that if you map a function g over a list and then map f over the result, that's the same as mapping f dot g over the list. So you can combine two maps into one. So how are we going to prove that? Well, with numbers, we used the thing that was true for n to prove the case for n plus one.
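Before the proof, here's a recap of the definitions we'll lean on, roughly as Haskell defines them (I've primed the names to avoid clashing with the Prelude's own versions):

```haskell
-- A list is either empty, or a value paired with another list:
--   data [a] = [] | a : [a]    (built-in syntax, shown for reference)

length' :: [a] -> Int
length' []     = 0
length' (x:xs) = 1 + length' xs

map' :: (a -> b) -> [a] -> [b]
map' f []     = []
map' f (x:xs) = f x : map' f xs

-- Composition:  (f . g) x  ==  f (g x)
-- The functor composition law we want to prove:
--   map' f (map' g xs)  ==  map' (f . g) xs
```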
With lists, we can take something that's true for a list and try to prove it for the list plus one more element. So I'm going to write this out: map of f, of map of g, over a list with an element stuck on the front of it. Let's try to rewrite that. We can use the definition of map; if you recall from a second ago, mapping over a list means you apply the function to the first element and then map over the rest. We use that to rewrite the inner term, so it becomes g of x, paired with map of g over the rest of the list. We can then use the definition of map again to do the outer map. Now that we've turned that inner list into one of those colon expressions, we can take that g of x, apply f to it, and pair that with mapping f over the rest of the list.

So what have we got? We've got this thing on the right hand side that looks like our n case: our list without the element appended. And we are assuming, as part of our recursive proof process, that that's equal to map of f dot g over the list. We're assuming the result and using it to replace that term. We also know that the first term, f of g of x, is the same as f dot g of x, from the definition of composition. And now, finally, we've got something that looks like a map expression. And in fact, yes, we can rewrite that as map of f dot g; we're using the definition of map in reverse, going from the right hand side to the left hand side. So that's equal to map of f dot g of the list with one extra element added. We've proved the thing we're trying to prove by adding one more element to the list. Now you'd only have to find an example where it's true to know that it's true for lists of any length, and you can pretty trivially show that it's true for the empty list. So that means it's true for all of them.

There are a couple of interesting things here that will come up in later sections of the talk. One is that we not only used a function definition to go from the left hand side to the right hand side; we also went backwards. In that last step, I noticed the expression looked like the definition of map, so I turned it back into a map. That idea of going backwards will come up a little bit later. There's also the fact that we never actually traversed the list: we only looked at the first element and pulled that through the expression. You're not evaluating things that you don't need to, and that really helps with performance. It enables Haskell to do very interesting things with lazy evaluation.

So let's talk about that for a bit. Performance, right? We all like performance. We like things to go as fast as possible. But getting performance out of things by hand is quite hard. Fortunately, there are tools that, if you express your problem in the right way, can sort of optimize it for you, and recursion is one of the ways we can do this. As an example of a quite specialized tool that uses structures to improve performance, I'll talk about make a little bit.

Imagine you're working on a project where you want to compile a bunch of CoffeeScript files into JavaScript, then concatenate the result and deploy that on your website. One way you might do that is to write a bash script: you make a directory to put the compiled files in, you iterate over all your source files, you run the coffee compiler over each of those to generate some JavaScript, and then you concatenate all the compiled JavaScript into some artifact.
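Something like this, sketched from the description; the paths here are made up for illustration:

```bash
#!/usr/bin/env bash
# The "recipe" version: compile every source file, every time,
# then concatenate all the output.
mkdir -p compiled

for src in $(find src -name '*.coffee'); do
  coffee --compile --output compiled "$src"
done

cat compiled/*.js > compiled/all.js
```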
So if I run this script against the CoffeeScript project itself, it takes about 13 seconds. A different way I could do this is to write a Makefile. This says that my source files are: find all the files in my project called something.coffee. And the outputs, the JavaScript files, are: take the sources and replace .coffee with .js. Then, to generate your main file from the outputs, you just concatenate the outputs and put them into the target. To generate a compiled JavaScript file from a CoffeeScript source file, you run the coffee compiler on it. And your compiled directory just needs to be created with a mkdir command.
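Here's my reconstruction of that Makefile; the file layout and variable names are guesses (and remember that make recipes must be indented with tabs):

```make
SRCS := $(shell find src -name '*.coffee')
OBJS := $(patsubst src/%.coffee,compiled/%.js,$(SRCS))

# The final artifact depends on every compiled file.
compiled/all.js: $(OBJS)
	cat $^ > $@

# Each .js file depends only on its own .coffee source.
compiled/%.js: src/%.coffee | compiled
	coffee --compile --print $< > $@

compiled:
	mkdir -p compiled
```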
So this sort of says the same stuff. It describes the same operations that you need to do, but it describes them in a particular way that means make can do interesting things with it. One thing it can do: if you run this from a clean checkout, it does all the same stuff the previous one did. It compiles all the files and then concatenates them together, and it takes about the same amount of time. However, if I just touch one of the files and run make again, it will only recompile that file. So it's massively reduced the time to run this, because the compiler isn't running over all the inputs. If I touch a few files and run make again, it runs the compiler on each of them and concatenates them, doing those compilations sequentially; that takes about 3.4 seconds. If I pass -j3, which means run at most three concurrent jobs, then it actually runs the compilation processes in parallel, and that's knocked a second off the run time.

The reason it can do that is that the way you express builds in make is by expressing dependencies between things. You're describing the structure of your project to make. So it knows things like: grammar.js is derived from, depends on, grammar.coffee, and likewise for all the other files. It knows the ultimate output file depends on all of the JavaScript files. And it uses that information, that structure, to optimize things. It can spot that if you've only changed one of your CoffeeScript files, you don't need to recompile all the other JavaScript files, because they're not affected by that one. It can also spot that each compilation is independent: each CoffeeScript file turns into one JavaScript file and doesn't affect any of the others. Those being independent, it can run them in parallel, and it automatically figures that out with the -j option to make your build faster. But it also knows that the final output, which depends on all the inputs, needs to wait on all of those inputs being done. So just by describing the structure of your project, you get it to go faster but still automatically do the correct thing.

So that's a specialist application, right? make is a specialist tool for building projects, and it uses structures to improve performance. For something that applies this idea to computation in general, we go back to Haskell. A few more function definitions for you: we saw map earlier, and I'm just going to add a couple of others. Filter is like select in Ruby. Filtering an empty list gives you an empty list, and filtering a predicate over any other list has two cases: if p of x is true, that is, the function evaluates to true for that element, then keep that element and pair it with filtering over the rest of the list. Otherwise, drop that element and filter the rest of the list. So we're keeping or dropping an element depending on whether the predicate applies to it. And there's a function called bang bang, written !!, which is similar to the square-bracket array indexing syntax in Ruby: it gets you the nth element of a list. The way that works is that if you call it with zero, you get the first element. If you call it with anything else, it recurses down: it drops the first element, goes to the rest of the list, and decrements n by one. Do that enough times and you get down to zero, and you pick the element out of wherever you are.

So let's say that I want, for some reason, to find element number two of the list of the squares of the even numbers from an infinite list beginning at one. That "one dot dot" signifies an infinite list. In Ruby, well, for a start, you can't construct an infinite list. But if you could, the filter would take forever: it would loop over the whole list and you'd never get to the map part. But in Haskell this evaluates, and it does so in quite an interesting way, by using these equations between structures.

So let's work on this a bit. We can rewrite the infinite list as one, paired with the list from two to infinity. That means that, using the definition of filter, we can pull that one off the front of the list, and because even of one is false, we just drop the one. Now we have the list from two to infinity, so we can pull the two out. even of two is true, so we keep that element: we've got two, paired with filter over the rest of the elements. Now we can use the definition of map to pull that two out, so we get two squared, paired with map over all the rest of the stuff. And finally we get to the definition of bang bang. Our index is two, so we haven't got to zero yet, and we know that what we need to do is drop the first element of the list we're considering and decrement n by one. So we drop the two squared off the front and turn the two into one. Note that we didn't even bother evaluating two squared, because we're not going to use it.

Do that process again: you pull the three out and drop it, because it's not even. You pull the four out and square it; but we're at index one, so we drop that element and go to zero. Five is not even. Six is even, so we pull it out and square it. And now we're down to index zero. So that means, ignoring all the pending map and filter stuff, we can just take that first element off. That gets us six squared, which is 36.

So an interesting thing has happened here. We've managed to get some information out of an infinite list without having to traverse all of it. And what you notice about this process is that values are kind of streaming through it. In a language like Ruby, your select would run over the whole list, and then your map would run over the result of that, so you get two iterations. When you evaluate things this way, you only get one, because everything is sort of happening at the same time. You work on one expression a little bit, just doing its next step, and then you can use that to step through some other part of the expression.
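In Haskell syntax, the pieces we just traced look roughly like this (again with primed names to avoid the Prelude's own filter and !!):

```haskell
filter' :: (a -> Bool) -> [a] -> [a]
filter' p []     = []
filter' p (x:xs)
  | p x       = x : filter' p xs   -- keep the element
  | otherwise = filter' p xs       -- drop the element

(!!!) :: [a] -> Int -> a
(x:_)  !!! 0 = x                   -- index zero: take the head
(_:xs) !!! n = xs !!! (n - 1)      -- otherwise: drop one, decrement

-- Lazy evaluation means this terminates despite the infinite [1..]:
example :: Int
example = map (^ 2) (filter' even [1..]) !!! 2   -- => 36
```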
So you sort of work on the whole problem at once, rather than having to finish this one function before moving on to the next. This laziness means that Haskell can do things like infinite structures, and it also means it avoids doing work that it doesn't need to do. So that's a huge performance optimization. The reason it can do that is that rather than functions being defined as sequences of instructions, as processes, they're defined as equations between structures. In Haskell, you just say: whenever you see something like this, you can rewrite it into this other thing. They're just rewrite rules. And it's those equations between structures, and their recursive definitions, that mean you can work in this way.

The final thing I want to talk about is, well, time travel, which is going to sound a little bit weird, but it's the idea that you can run programs kind of backwards. In Ruby, when we write functions, you give them inputs and they produce return values. You can't give a Ruby function a return value and have it tell you what the inputs might have been. But there are languages where you can do that, and they build on all the ideas we've just seen. So I'm going to show you one of these languages. It's called microKanren, and I'm going to implement it in Ruby. I'm doing it in Ruby partly so you know there are no tricks up my sleeve: what I'm about to show you isn't something Ruby can do by itself; it's just the result of the few functions I'm going to show you, and the structures in them, that let it do this. There are very few moving parts in this language, so we can get it done quite quickly.

There are two kinds of values: variables and pairs. A variable is just something that has a name: it literally takes a name and keeps hold of it. The only point of this is to have variables as values, actual things we can talk about. And then you have pairs. A pair is just something with a left field and a right field. It's the simplest data structure you can make, and you can build any other data structure out of it, by chaining pairs together into lists or by making trees out of them. You can do whatever you need. This is the minimal possible set of values to support what we want to do.

You then have states, and a state is a map of variables to their values. A state is initialized with a list of the variables it contains, and a hash that maps those variables to their values. And states are immutable: whenever we change a state, what we actually do is return a new one that contains the change we want to make. There are two kinds of changes you can make to a state. First, you can create variables: you pass in a list of names, it makes some variables out of them, and it makes a new state with your existing variables plus the new ones, and your existing values. It also returns the new variables it created, because what you often want to do is make some variables and then make some assertions about them, so you need both the new state and the variables themselves. Second, you can assign a variable a value. All that does is return a new state with the existing variables, and the values updated to map that variable onto the value you want.
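Here's a minimal sketch of those pieces in Ruby; the class and method names are my reconstruction from the talk, not a published API:

```ruby
# A variable is just a named thing; identity equality is deliberate,
# so two variables with the same name are still distinct.
class Variable
  attr_reader :name
  def initialize(name)
    @name = name
  end
end

Pair = Struct.new(:left, :right)

class State
  attr_reader :variables, :values

  def initialize(variables = [], values = {})
    @variables, @values = variables, values
  end

  # Returns [new_state, new_variables]; the receiver is not modified.
  def create_variables(names)
    new_variables = names.map { |name| Variable.new(name) }
    [State.new(variables + new_variables, values), new_variables]
  end

  # Returns a new state in which `variable` maps to `value`.
  def assign(variable, value)
    State.new(variables, values.merge(variable => value))
  end
end
```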
So, for example, if I make a new state and a variable called a, I can call s.assign(a, 42), and what I get back is a state in which a equals 42. Pretty simple so far.

Then there are a couple of operations built on those methods. One is called walking. Walking is resolving the value of something. For example, if you want to know what a variable is bound to, and that variable refers to another variable that refers to another variable, it will walk that whole chain and find you the ultimate thing at the end of it. If the term you're looking for is in your values map, you look it up and walk the result again; it's a recursive process. If the term is a pair, you return a new pair by walking both sides of it. If you can't do either of those things, you just return the term itself. So if you pass in a variable and that variable isn't bound to anything yet, you'll just get the variable back; but if it is bound to something, you'll get its value. You can use this to find out whether a variable has a value yet.

The final bit is unification. What unification tries to do is return you a new state in which the two things you pass in are equal, if it's possible to do that. You start off by walking x and y; that's the thing where, if those variables have values already, you get the values out, and if they don't, you just get the variables back. If x is already equal to y, you return the current state unmodified. However, if one of them is a variable, that means it's not bound to anything yet; otherwise walk would have resolved it to a value. So if x is a variable, you can assign y to it, and likewise, if y is a variable, you can assign x to it. Finally, if they're both pairs, you attempt to unify their left hand sides, and if that gives you something, you unify their right hand sides. And if you get to the end of this method, you simply get nil out. So if you can't make the two things equal, you get nil: this method gives you a state, or nil.

So let's try that out. If we make a new state and create some variables, and we try to unify a with three, and then b with a, then we get a state where both a and b are equal to three. If we try to unify a pair of three and a, with a pair of b and a pair of five and b: by their left hand sides, b should be equal to three. That means the second right hand side is a pair of five and three, and that means a is a pair of five and three, and indeed that's what it figures out. So you just give it statements that things should be equal to each other, and it tells you what state the world could be in for that to be true.

The final bit of the language is what are called goals. A goal is just a function that takes a state and returns a list of states, and it does that by applying some sort of condition to the state you give it. We're going to implement this as a wrapper around a Ruby block: it just stores off the block, and it has a method called pursue_in, which just calls the block. You could use bare blocks for this; I just wanted to put some methods on it when I was implementing this. There are four main types of goal. The first one is called equal. It takes two inputs, and it returns you a goal that tries to unify those two inputs. So if those things can be made equal, you get a list with one state in it, where they are equal. If they can't be made equal, you get an empty list.
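Sketches of walk, unify and the equal goal, continuing the State class above; again, this is reconstructed from the description rather than copied from a published implementation:

```ruby
class State
  # Resolve a term: follow chains of bound variables, walk inside
  # pairs, and return anything else unchanged.
  def walk(term)
    if values.key?(term)
      walk(values[term])
    elsif term.is_a?(Pair)
      Pair.new(walk(term.left), walk(term.right))
    else
      term
    end
  end

  # Return a new state in which x and y are equal, or nil if they
  # cannot be made equal.
  def unify(x, y)
    x, y = walk(x), walk(y)

    if x == y
      self
    elsif x.is_a?(Variable)
      assign(x, y)
    elsif y.is_a?(Variable)
      assign(y, x)
    elsif x.is_a?(Pair) && y.is_a?(Pair)
      state = unify(x.left, y.left)
      state && state.unify(x.right, y.right)
    end
  end
end

# A goal wraps a block that takes a state and returns a list of states.
class Goal
  def initialize(&block)
    @block = block
  end

  def pursue_in(state)
    @block.call(state)
  end

  # Succeeds in the states where x and y can be made equal.
  def self.equal(x, y)
    new { |state| [state.unify(x, y)].compact }
  end
end
```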
So if I make a new state and a variable, and I make a goal that says a is equal to one, then when I pursue that goal in the state, I get the state where a equals one, because that's satisfiable.

The next one is called bind. We've had all these examples where I make a state and some variables and then do stuff with them; being able to do that as a single expression is really useful in this language, so there's a goal for it. You give it a list of names and a block, and the block should return a goal of some kind. What this gives you is a goal that takes a state, creates some variables with the names you passed in, calls the block with those variables to get a goal, and then pursues that goal in the new state it made. So, for example, if I want to make a goal that says some variables a and b are equal, I can write that by saying: bind a and b, and that gives me variables a and b, and then make a goal where they're equal. And if I pursue that, I get a state where a and b are equal.

Then there are two things for composing goals, and these are the final two ingredients. The first one is either. This takes two goals and gives you all the states where either one of them can be satisfied, and it does that by pursuing one of the goals in the current state and concatenating that with all the states you get by pursuing the other goal. So, for example, if I introduce a variable called x, and I say I want all the states where either x is two or x is three, then what I get is a list of a state where x is two, and a state where x is three. The other one is called both. either lets you satisfy either of the goals; both only gives you states where both goals are satisfied. You do that by pursuing the first goal, and then, for each of the states you get back from that, pursuing goal b in that state. So you generate all the states that a can lead to, and then from each of those you go to all the ones that b could lead to. As an example: if I make variables x and y, and I want both x to equal eight and y to equal nine, then that gives me one state where both of those things are true.

Now, what's useful about all of these is that you can compose them. This actually forms a general-purpose platform for doing computing; I think this is equivalent to a Turing machine, I'd have to check, but you can implement any computable thing on it. So, for example, if I make some variables x and y, and then say that I want a goal where both x is either red or blue, and y is yellow, and then I pursue that, what I get is a state where x is red and y is yellow, and a state where x is blue and y is yellow. So you compose these things and it all just sort of works.
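Here's how those three goals might look, continuing the sketch; the flat_map in both is the key move, pursuing goal b in every state that goal a produced:

```ruby
class Goal
  # Create fresh variables named `names`, pass them to the block
  # (which must return a goal), and pursue that goal in the new state.
  def self.bind(*names, &block)
    new do |state|
      new_state, variables = state.create_variables(names)
      block.call(*variables).pursue_in(new_state)
    end
  end

  # All the states in which either goal can be satisfied.
  def self.either(goal_a, goal_b)
    new do |state|
      goal_a.pursue_in(state) + goal_b.pursue_in(state)
    end
  end

  # All the states in which both goals are satisfied.
  def self.both(goal_a, goal_b)
    new do |state|
      goal_a.pursue_in(state).flat_map { |s| goal_b.pursue_in(s) }
    end
  end
end
```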
OK, so I've just shown you this programming language, and it's probably not at all clear at this point what it's useful for, or what it can do, or how you program with it. So, by way of one brief example, you can use it to implement list operations. Earlier we saw lists as chains of pairs, and we have pairs in this platform, so we can use those to make lists. To make a list out of a Ruby array: if the array is empty, we return a special value called null; otherwise we make a new pair out of the first element of the array and the result of recursively processing the rest of the array, so you get that chain of pairs. To make a list out of a string, we just use that with the string's characters. We also need some methods for turning things back into normal Ruby values: if you have a pair and it's null, you get an empty array; otherwise you build an array with the left value, and then expand the right value to get the rest of the list.

So here's another Haskell list operation: append. This takes two lists and returns a third one by concatenating them together. Appending an empty list to anything just gets you the second value, right? Concatenating an empty list onto something leaves it unmodified. But to append any other list x to y, you take the first element of x, and pair it with append of the rest of x to y. If you imagine your two lists, you shuffle the first element off the front of the first one, and that becomes the first element of the output, and then you recursively process the rest of the first input. What that means is you're basically putting all the elements of the first list onto the front of the second one.

We can write this in the language we've just implemented. It's a little bit more verbose, but it says exactly the same thing as the Haskell code; it just says it in a different way. The way you write functions in this language is that you don't have inputs and outputs as separate things; you just have relationships, or relations. So rather than taking two inputs, this takes three things, and it describes the relationship between the inputs and the output in a meaningful way. So what does appending two things and getting a result mean? Well, it means that either it's true both that x is null and that y equals z, the second input equals the output; or, if we introduce some variables, we can say that x is a pair of some value head and the rest of x, and z is a pair of the same value head and the rest of z, so x and z have the same first element, that's what that's saying, and it's also true that appending the rest of x to y gives you the rest of z. So this describes all the things that have to be true for an append function to have done its job. You're describing what appending means, not just how to do it.

So, for example, if I make some variables x, y and z, and say that x is the string "rec", and y is the string "urse", and I want x and y appended to give z, then when I pursue that goal and get the results out, it tells me: yes, x is "rec", like you said, and y is "urse", and z is "recurse". It's concatenated x and y together for me. I didn't write any code here to tell it to do that; it's just figured out that z must be that for this to be true.
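Here's a sketch of that relation, written with the goals we built above; I'm using nil as the talk's null marker for brevity:

```ruby
# Not inputs and an output: just three terms and the constraints that
# must hold between them for z to be x appended to y.
def append(x, y, z)
  Goal.either(
    # Case 1: x is the empty list, so y and z must be the same.
    Goal.both(Goal.equal(x, nil), Goal.equal(y, z)),

    # Case 2: x and z share a first element, and appending the rest
    # of x to y gives the rest of z.
    Goal.bind(:head, :x_rest, :z_rest) do |head, x_rest, z_rest|
      Goal.both(
        Goal.both(
          Goal.equal(x, Pair.new(head, x_rest)),
          Goal.equal(z, Pair.new(head, z_rest))
        ),
        append(x_rest, y, z_rest)
      )
    end
  )
end
```

Note that nothing here says which of x, y and z must be known in advance; whichever terms are left as unbound variables are the ones unification will fill in.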
So that's the normal way you're used to programming: inputs to outputs, right? But you can go the other way. If, instead of defining y, I define z, so I'm saying z is the string "recurse", then it will again figure out that, oh, that must mean y is "urse". It sort of goes backwards. And you can do that not just on one input but on all of them: I could write something that doesn't define x and y at all, and just says that if you append them you get "recurse". When you pursue that goal, what you get is all the possible pairs of inputs that could have led to that output. So it's not just running something backwards; it's telling you all the ways you could have got there.

And it's this expression of the problem in a meaningful way that matters. It gives the impression that the computer somehow knows what append means, but it doesn't. All this is really doing is using structures and asserting that they're equal. But expressing a problem in constraints like this means that the computer can act as though it knows what things mean. You haven't just told it how to do something; you've told it what something is, and it can solve the problem for you, rather than you having to do it.

So, I've talked a lot about recursion and structures and the things you can do with them, but I suppose the big picture for me here is not so much recursion itself; it's the idea that there are different ways of using computers to solve problems. One way of using a computer to solve a problem is that you figure out how to solve it, you come up with some recipe for doing it, you write the instructions for that recipe as a program, and then you execute it. That's your bash-script version of solving the problem. The second way is to describe the structure of the problem, in such a way that the computer can do more of the figuring out of the solution for you. That's your Makefile version of solving the problem. A lot of things exist on a spectrum between these two, and what appeals to me, and where I think there's a lot of space, given all the variety of tooling we have in the Ruby ecosystem, is looking for more opportunities to transition into that second type of programming rather than the first. So when you go back to your web apps, or whatever it is that you do, today, try to think in terms of: how could I represent the structure of this problem so that I don't even have to figure it out, and it just sort of happens for me? To me, that's where the real power of computers is. Thank you very much for listening.