How y'all doing today? You good? You're revved up? You know what the difference between a keynote talk and a regular talk is? It's that I don't have to cut out all my material to get down to 30 minutes. I'm here to talk to you today, and this is essentially a talk about what I did over my summer vacation, except this was more like last fall rather than over the summer.

But before we get into that, I want to talk about this guy right here. He was a physicist and did a lot of work in the early days, in the early 1900s, on the development of our model of the atom and our understanding of how the atom is put together. He was a contemporary of Einstein. In fact, he and Einstein had a series of debates on quantum mechanics. The summary of that is that Niels Bohr was correct and Einstein was wrong. So can you imagine that? Back in the early 1900s, physicists had discovered that the atom was not a concrete little ball of something. It was actually made up of several parts. It had a very dense nucleus that was positively charged, and it had a bunch of electrons that somehow orbited around that, an electron cloud that was attracted to the nucleus because the electrons were negatively charged and the nucleus was positively charged. There was a very strong electromagnetic attraction between them. But what they didn't understand is how that electron actually maintained its orbit. Because, you see, they had in their minds something kind of like a planetary model, with the nucleus at the center and the electron spinning around that nucleus very quickly. Now, they knew that if you take a charged particle and you move it back and forth really fast, that generates electromagnetic radiation, and radiation takes energy to generate. So if an electron was orbiting the nucleus really, really fast, then it had to be giving off radiation. And because it was losing energy, it would eventually spiral in and collide with the nucleus. So what was keeping the atom from collapsing?

Bohr thought about this. And I've been told that he came up with this in a dream, although I tried to look it up on our source of all knowledge, Wikipedia, and it didn't say anything about that. But I understand that he saw the explanation, which we will get to in a second, in a dream. There's a really interesting phenomenon: if you take any material and heat it up until it glows, it gives off radiation. And not just general radiation; the radiation comes in spikes. So you have a very strong spike here in the red, and in the yellow, and in the greens and the blues. And different materials have entirely different signatures for how they generate these patterns. And no one knew why every material was different, or even why they gave off these particular spectral patterns. So they took a look at hydrogen. They noticed that hydrogen had a very simple pattern. It had one line down here in the red, another line up here in the greenish blues, then a blue, then one in the ultraviolet range. And these were very set distances apart, and differences in frequency are differences in energy. And so Bohr had the idea: what if the electrons orbiting a nucleus could only exist in certain energy states? Then when an electron moved from a high state to a lower state, it would have to give up energy, because the states closer to the nucleus would be lower energy states and the ones farther away would be higher states. So as it moved closer, it would lose energy, and it would emit a particle of light.
And because the different states were discrete, it could only move from one state to another and only emit very distinct patterns of light. And so that was the explanation. So here you have, as we move from energy state one to two, you would emit or absorb a red line of light energy. Between two and three was yellow. Between three and four was green, and so on. So each jump between energy levels had its own quantum of light energy. Quanta — quantum mechanics. And it was the very beginning of the idea of quantum mechanics, the physics of the very, very small and the very, very fast. And Bohr was a leader in that thought.

But this kind of idea was mind boggling. No one had ever thought of this before Bohr did it. It's an example of what I call lateral thinking, thinking outside the box: given a problem, can you solve it in a way that no one has thought of before? Can you think outside the conventions and do something different? And that's what Bohr did in solving the question of how the electrons stay in orbit around the nucleus.

But there was an interesting question raised because of that. If moving between orbits in an atom releases a particle of light, that's very interesting, because scientists at that point believed that light was actually a wave. And they had good reason to believe that as well. If you take a wave source, like light or sound or even waves in the ocean, and you force them to go through two slits like this, what you might expect is to get two different patterns. If I cover up slit number two, I get this intensity graph right here. If I cover up slit number one, I get this intensity graph right there. However, if I leave them both open, I don't get the sum of the two intensities. I get something entirely different. And that's because the waves interfere with each other. As one wave is reaching its trough, the other one is reaching its peak, and they tend to cancel out. They tend to cancel out in a pattern that looks like that graph on the far right. So if you look at a light wave, you can actually measure it, and you can see the interference pattern. And with a double slit interference pattern, it is obviously exhibiting wave behavior. So is light a particle, or is it a wave, like the double slit experiment showed?

Now, what if instead of a wave you took single particles, like electrons, and shot them through a double slit like this? Well, if they were particles, and you shot one at a time and did it a whole bunch of times, over and over and over again, you would get two lines of bullet holes, right? Two lines, one for one slit, one for the other, and they wouldn't interfere with each other, because you're sending one bullet at a time, and so there's no chance for it to interact with something going through the other slit. It either goes through this slit or goes through that slit. So you should get two patterns. So let's take electrons, let's shine them through a double slit, and you get this. Well, doggone it. That looks like wave interference. And you get this even if you slow down the electrons so they go through one at a time. Instead of getting the particle behavior we believe electrons to have, we get wave-like interference behavior. Bizarre. That's weird. In fact, if you take electrons and shine them through a crystal, you get this kind of marvelous pattern.
In fact, we can tell the structure of crystals mainly through electron diffraction, by shooting electrons and seeing how they scatter and interfere with each other. Fascinating stuff. But we've got this whole wave versus particle duality. Are electrons particles? We certainly thought they were, but they act like waves. Are light waves really waves? They act like particles sometimes. So when you get to the realm of the very small, we've got this strange duality thing going on.

At one point in time, this was our model of the atom. Bohr taught us that the electrons actually existed in discrete orbits, so we improved our model of the atom. And today we understand the atom a little bit differently. We understand the atom as a series of probability waves that are shaped something like this. So this is kind of the true modern picture of the atom right now. Now, which of these is the true picture? Is it this one, or this one, or that one? And the answer is: none of these. These are models. And models are neither correct nor incorrect. They are either useful or not useful. If a model is useful for understanding something, then that's a good model. If the model gets in your way and prevents you from using it effectively, then it's a bad model. For a lot of purposes, this is a perfectly good model of the atom. For other purposes, where you need deeper understanding, you might need to use this model of the atom. For elementary schools, this might be perfectly good for explaining a lot of things. So models are important. Models come from abstractions. Abstractions are a good thing. Abstractions are neither correct nor incorrect. They are either useful or not useful, depending on what you want to do with them.

So, three things I want you to keep in mind for the rest of this talk. Number one, lateral thinking: think outside the box, think of things that people haven't thought of before. Number two, dualities are cool and useful; embrace them when you can. Number three, abstractions are good and powerful things. So let's keep those three things in mind. Now we'll get to the meat of my talk.

I'll start last summer at the software craftsmanship conference in Chicago that I attended. At that conference, something I call the rule of three happened: when something is mentioned three times in quick succession, that means it's probably something I should look at and investigate. And during that conference, I heard Bob Martin of Object Mentor mention Structure and Interpretation of Computer Programs, an MIT introductory text. I heard Michael Feathers mention the same text. And there was a third person, who I now forget, who also mentioned it. So three times in quick succession I heard this book mentioned. It's called the Wizard Book, for obvious reasons. And this is an introductory text used at MIT for beginning programmers. And the recommendation was that all developers should take a look at this book and see what it has for them. And I was thinking, an introductory book? I've been programming for at least as long as Josh now. What could this possibly have that's of use for me? Well, we know it's an introductory book because the very first exercise is this: please evaluate these expressions. 5 plus 3 plus 4. 9 minus 1. 6 divided by 2. So, very introductory. We also know it's an MIT book because this is the last exercise: produce a Scheme interpreter written in C. Yeah. OK. Remember, this is the introductory text. MIT computer science. So yeah, mind boggling.
So I sat down and decided I was going to pick up this book. And several other people at the conference decided they would do it as well. We started a Google Groups email list where we talked, and we also set up several online meeting times where we would get together with Skype and screen sharing and, every week, talk about the different sections of the book as we read through them together. Now, a lot of people have done this, and if you've not done this, I encourage you to give it a try. We got up to and through chapter 2, and we were starting chapter 3. Chapter 3 is where it gets interesting; that's where our group kind of fell apart, and I've not pursued it since then. So I'm going to share with you the things from chapter 1 and chapter 2, and I will let you imagine the things from chapters 3, 4, and 5. The examples here are going to be in Ruby, but they're going to be in very non-idiomatic Ruby — something bizarre and weird that you probably haven't seen much of.

So we'll start with this. This is from the first chapter of the book. It is a square root function. Given any non-negative number, it will calculate the square root of that number, and it does so by a series of approximations. It starts off with a guess; the guess is 1. Now, if you take that guess and square it, multiply it times itself, and subtract our original number from that, we get the difference between the square of our guess and the answer. And when that difference is less than some arbitrary limit, then we're going to say, hey, we're done, and we return the guess as the square root answer. So we get arbitrarily close; we can make this limit as big or as small as is useful for us. Now, while that's not true, while our guess is wrong, we need a way to come up with a new guess that's better than the one we have. Well, the square root times itself is equal to x, so if we take x and divide it by our guess at the square root, we're going to come up with another number that's off from the true square root in the opposite direction from our guess. And if we average our guess and x divided by our guess, we're going to come up with a new guess. So that's this: we take the guess, divide x by the guess, and average those two numbers together, and we come up with a new guess that should be a better guess than the one we have.

So if we run square root of 100, it'll look like this. The first iteration, our guess is 1 and x divided by the guess is 100, so the new guess, the average of those two numbers, is 50.5. Next iteration, our guess is 50.5, x divided by the guess is almost 2, and our new guess is 26. The next iteration our new guess is 15, then around 10, then 10.03, and then 10.0005. So you see that we're zeroing in on 10, which fortunately for us happens to be the square root of 100 — for those who are math challenged. And it actually zeroes in very quickly. That's about six iterations through, and bam, we've got a very, very, very good estimate for the value of the square root of 100. This is called Newton's approximation method for square roots, and it converges very quickly; that means it doesn't take very many iterations to get a really good guess. And we run this, and we can see that this is the answer that we get when we run that. Cool.
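A minimal sketch of what that first, inlined version looks like in Ruby (the method name and the 0.001 tolerance here are illustrative, not necessarily the exact code from the slide):

    def sqrt(x)
      guess = 1.0
      # Keep going while the square of our guess is still too far from x.
      while (guess * guess - x).abs > 0.001
        # Average the guess with x divided by the guess to get a better guess.
        guess = (guess + x / guess) / 2.0
      end
      guess
    end

    puts sqrt(100)   # => approximately 10.0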
So let's look at this. We can break this code down in several different ways. Here's one piece: we ask the question, is the guess good enough? Well, we can make that a method, so let's do that. Let's write a method called good_enough; we give it the guess and the original number, and while the guess is not good enough, we continue in the loop. The second piece we could look at is improving the guess, so let's replace that with a method called improve_guess. Now, this is a very interesting piece of code. Here's good_enough, and here's improve_guess; they're just copies of what was originally there. The interesting thing about this piece of code is: where is the square root logic? What makes this algorithm particular to finding the square root? Well, it's not in this code anymore. It's all in the good_enough and improve_guess methods. So we've pulled the square root nature out of the algorithm and put it into two separate functions, and this is now just some kind of general approximation loop that can approximate anything we want. We can find the root of anything — not just square roots, but cube roots or any polynomial — just by plugging in the correct versions of good_enough and improve_guess.

So let's do something interesting. Let's parameterize this. Let's define find_root, a general root-finding method that takes a number, and we will pass in good_enough and improve_guess as parameters to this function. We've pulled out the square root nature of it. We've made it a general purpose function. Now we can find any root just by passing in the right values for these. Now, since these are now functions, we're gonna call them with the function-calling syntax, and this is a Ruby 1.9 idiom right here. Since these things are essentially lambdas, we could do .call on them, but we're gonna use the Ruby 1.9 .() syntax, which is a little bit shorter. Now, to use find_root — oh, and I gotta mention this. good_enough and improve_guess, instead of being methods like this, become lambda objects. And again, we're using the Ruby 1.9 syntax; we call it the stabby proc syntax. How many people here use 1.9 regularly? Excellent. It's time to move, guys. It really is. A stabby proc is essentially a lambda that allows you to declare parameters and do the normal default-value things and the star-args things that you can do with normal methods. You can write these lambdas now using the stabby proc notation. And so this creates a procedure object, a function. We assign the square root good_enough and the square root improve_guess, and then we can call find_root with 100 and pass in the two functions that we need to find a square root. And this will work. We run this and it gives us the exact same answer as our previous code. Cool, that's neat.

It's awkward to call, though. I don't want to have to pass these two functions around every single time I call this. So find_root is a method that finds a root. What would be really useful, instead of finding a root, is if I could construct a function that found a root for me. With find_root I have to pass in good_enough and improve_guess every single time I call it. But what if I did this? Let's rename our function to make_find_root. So this is a method that returns a function. The function it returns is essentially this find_root function, but we build it by passing in good_enough and improve_guess, and within the lambda we bind good_enough and improve_guess to those usages right there.
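Here's a sketch of both steps just described — find_root taking the two pieces as stabby lambdas, and make_find_root wrapping the same loop up into a function builder (the names follow the talk; the bodies are my reconstruction):

    sqrt_good_enough   = ->(guess, x) { (guess * guess - x).abs < 0.001 }
    sqrt_improve_guess = ->(guess, x) { (guess + x / guess) / 2.0 }

    # The general loop: nothing square-root-specific left in here.
    def find_root(x, good_enough, improve_guess)
      guess = 1.0
      guess = improve_guess.(guess, x) until good_enough.(guess, x)
      guess
    end

    puts find_root(100, sqrt_good_enough, sqrt_improve_guess)   # => ~10.0

    # Same loop, but returned as a function built on the fly.
    def make_find_root(good_enough, improve_guess)
      ->(x) {
        guess = 1.0
        guess = improve_guess.(guess, x) until good_enough.(guess, x)
        guess
      }
    end

    square_root = make_find_root(sqrt_good_enough, sqrt_improve_guess)
    puts square_root.(100)   # => ~10.0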
So what we get returned from this function is not a root, it's not a number — it's a function that calculates the root. We have constructed, on the fly, a function that will find the root for us, given the proper good_enough and improve_guess methods. That's kind of mind boggling, guys. Just think about that. We are building functions on the fly. So we would use it like this. We have a square root function object here, a variable, and we assign to it the return value of make_find_root, passing in the square root good_enough and the square root improve_guess, and then we can call square_root on 100 just like we did before. Now we've got the nice convenience of this, but we've wrapped the whole thing up into function-building procedures. That's cool.

Now, this is easy for square root, because we know how to calculate whether the guess is good enough and we know how to calculate the improved guess. But what if we wanted to construct other, arbitrary root finders? What if we wanted to find the cube root of a number, or the fourth root of a number, or find the solution to the quadratic formula? We need to come up with a way of specifying these two functions. Wouldn't it be nice if we could take the function we're trying to find the root of and pass it into a function called make_good_enough, and it would create the good_enough function and create the improve_guess function for us? So we give it the square function, and it comes up with the good_enough and the next-guess functions for us. And it would look like this. Oh wait, what are those question marks there? You have to write that yourself. Actually, if you read the book, it will tell you how to write the code that goes in those question marks. That's a little bit too math-deep for my talk right now. I read it and I understood it, but I'm not sure I could explain it to anybody yet. It has to do with taking the function, calculating derivatives, and building new functions around that, adding damping if you need it — it's really fascinating stuff. But we're using functional abstractions to build up more complex layers of functionality. So at the end we can just say things like: make a root finder for some arbitrary function, and use it. And that's magic. That is absolute magic.

The key thing about this: this is bizarre for Ruby programmers, for Java programmers, or probably even for Python programmers. We are not used to composing functions in this manner. This is an example of lateral thinking for us Rubyists. You don't normally write code like this. However, many people in here are JavaScript programmers, and for them this is not that uncommon. JavaScript has very much a functional flavor to it for building these things, and in JavaScript you tend to do this building up of things and using closures a little bit more than you do in Ruby. Embrace the differences. It's good for you. So the other thing I learned from chapter one: functional abstractions are really, really powerful. They do a lot more than you think. If you come from a non-functional language, you think, oh, a functional language is just about calling functions. No, it's not about calling functions. It's about creating functions that do particular things. A really powerful abstraction. That was chapter one.
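As for what goes behind those question marks: the book derives the improver from the function itself using its derivative (Newton's method). The following is only my guess at the shape of it, using a crude numerical derivative; the extra x argument is ignored because the target value is already baked into the function:

    DX = 0.0001

    # Numerical approximation of the derivative of f.
    def deriv(f)
      ->(v) { (f.(v + DX) - f.(v)) / DX }
    end

    def make_good_enough(f)
      ->(guess, _x) { f.(guess).abs < 0.001 }
    end

    def make_improve_guess(f)
      df = deriv(f)
      # Newton's method step: guess - f(guess) / f'(guess)
      ->(guess, _x) { guess - f.(guess) / df.(guess) }
    end

    # The zero of y*y - 100 is the square root of 100.
    square = ->(y) { y * y - 100 }
    sqrt_100 = make_find_root(make_good_enough(square), make_improve_guess(square))
    puts sqrt_100.(100)   # => ~10.0 (the 100 here is ignored; the target is baked into square)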
Let's move on to chapter two. Chapter two is about data structures. So chapter one was all about building functional abstractions; chapter two is about building data abstractions. And they use a very, very simple concept to base all the data abstractions on: the idea of the cons cell, or the pair. You take two things and you cons them together, and you get this cons cell that points to the first thing and to the second thing. And these things here can be arbitrary. They're numbers in this example; they could be other cons cells, so they can get quite complex. You also have two functions, called car and cdr, that, given a cons cell, will return either the first or the second item. So we've got the idea of a pair: you can build a pair, and you can take the car and the cdr of a pair. With these three operations — cons, car, and cdr, which are historically named, and probably unfortunately so — we can build any data structure we want. If you take a 3 and cons it onto nil, you get the list (3). You take 2 and cons it onto the list (3), and you get the list (2 3). So cons is about building up a list. Car is about taking apart the list and giving you the head of the list. Cdr is about giving you the tail of the list. So you can see you build it up and tear it down using these functions: cons builds the cell, car gets the head, cdr gets the tail. And you can build arbitrarily complex data structures with this. This is a fundamental data pattern that we can use to build anything that we want.

Okay, so if we were to do this in Ruby, this is an obvious implementation. Let's create a List data structure that takes a head and a tail; we'll just use a Struct here. And cons then just creates a new List object with the head and tail in it. The car function will take a list and return its head, and the cdr function will take a list and return its tail. Easy to do. So we can replicate this very easily in Ruby. We're also gonna write a couple of support methods — for example, to take an array and turn it into a list. That's just a convenience thing, along with something to print out lists here. And once we do that, we can write code like this. We can take an arbitrary list structure — I'm just using an array because it's convenient — and turn it into a list. And I can display the list, and I can take the cars and the cdrs and chain those things like that. When we run it, we get these answers. You can see that we display the entire list; we get this list exactly. The car of the list is the first element. The cdr is everything after the first element. The car of the cdr and the cdr of the cdr are these things, respectively. So you can see we can build them up and we can tear them down.

Okay, but here's a twist. We're building our basic data structure with something in Ruby that is actually quite complex: we're using classes to do that. What if we didn't have classes to work with? What if I didn't have Struct? What if I didn't have the ability to build a class in Ruby? What could I use to do that instead? Look at this code for a second; I'll let you mull it over. What if cons, instead of returning a data object, returned a procedure? And what if car and cdr, instead of getting the head and tail of that data object, called that procedure with either an :h symbol or a :t symbol? Well, if I build a list like this, cons returns a procedure that takes a single argument. When I call car on that, it'll ask: is the parameter :h? If so, return the head; otherwise, return the tail.
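A sketch of both implementations (the method names follow the talk; the details are my reconstruction). In practice these would be two separate libraries; they're shown together here for comparison:

    # Struct-based version.
    List = Struct.new(:head, :tail)

    def cons(head, tail)
      List.new(head, tail)
    end

    def car(cell)
      cell.head
    end

    def cdr(cell)
      cell.tail
    end

    # Proc-based version: cons returns a closure; car and cdr send it :h or :t.
    def cons(head, tail)
      ->(msg) { msg == :h ? head : tail }
    end

    def car(cell)
      cell.(:h)
    end

    def cdr(cell)
      cell.(:t)
    end

    list = cons(1, cons(2, cons(3, nil)))
    puts car(list)        # => 1
    puts car(cdr(list))   # => 2

Either set of definitions satisfies the same three-operation contract, which is the point: code that only uses cons, car, and cdr can't tell which one it's running on.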
So I have something that is functionally equivalent to our list data structure without using a data structure at all. I have built a data structure out of a functional abstraction. So let's run this. Let's run this exact same code, but instead of using our struct-as-a-list, let's use the proc-as-a-list library. And we see that it works exactly the same. Now, remember I said that the cons cell could be used to build arbitrarily complex data structures? And I've just shown that we don't even need data structures to build the cons cell to begin with. That means we can build arbitrarily complex data structures using nothing more than functional abstractions. It frees us from implementation details, which is a really interesting thing. We're dealing with abstractions — the abstract idea of a pair of things, and being able to build that pair of things up and to take it apart. But the actual implementation, whether we're using a struct, whether we're using procs, or whether we're using the address register and the decrement register — by the way, the contents of the address part of the register is car, and the contents of the decrement part of the register is cdr; that's where the names car and cdr came from, they were the machine register parts used to hold the two pieces of the cons cell in its original implementation, so there's a little history on the names car and cdr — it doesn't matter whether we're using machine registers, procedures, or Ruby-ish structs to build those cons cells; we are free from the details of that implementation.

We also have this cool code versus data duality thing going on. We see that code can be data, because we're using code to represent a cons cell. And I don't know if you picked this up as well, but data can also be code. This is very apparent if you're programming Lisp, where this looks like code, but if I do this, all of a sudden I'm operating on the code and returning pieces of the code. So in Lisp, code is data and data is code; it's all kind of intermingled together. Again, a duality that we should embrace. We can do a little bit of that with Ruby, but not nearly as cleanly as we can in Lisp.

Okay, more cool chapter two things. We have complex numbers in Ruby, but what if we were to implement them from scratch? We would start with a function, make_complex; it takes a real part and an imaginary part and conses them together. Again, it doesn't matter whether we use the procedure version of cons or the data structure version of cons; we don't care. The re function pulls out the real piece of that, and the im function pulls out the imaginary piece of our complex number. Now we're gonna start writing things like complex_add and complex_sub that use the abstraction of a number that has a real part and an imaginary part, and do something interesting with it. To add complex numbers, you just add the two real pieces together and add the two imaginary pieces together. Likewise, subtraction works in a very similar way. We can make two complex numbers, print them out, and then print their sum, and when we run that code we get this. So it works. Cool.

So what are the limitations of this particular implementation of complex numbers? Well, number one, this function assumes we have implemented the complex number in a particular way. Now, we're using cons cells, and we can implement the cons cell any way we want to, but we're assuming that the complex number itself is a cons cell with a real part and an imaginary part.
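A sketch of that complex number layer, built on whichever cons implementation you picked above (the function names follow the talk; the bodies are my reconstruction):

    def make_complex(real, imaginary)
      cons(real, imaginary)
    end

    def re(z)
      car(z)
    end

    def im(z)
      cdr(z)
    end

    # These two only know about re and im, not about how the pair is stored.
    def complex_add(a, b)
      make_complex(re(a) + re(b), im(a) + im(b))
    end

    def complex_sub(a, b)
      make_complex(re(a) - re(b), im(a) - im(b))
    end

    a = make_complex(1, 2)
    b = make_complex(3, 4)
    sum = complex_add(a, b)
    puts "#{re(sum)}+#{im(sum)}i"   # => 4+6i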
What if we wanted to implement polar complex numbers, where we store the angle and the magnitude of the number rather than the real and imaginary parts? Well, we have problems, because we're pulling out the pieces explicitly here, and you couldn't mix a polar number with a rectangular complex number and work with them together; their implementations would clash, and this code that pulls out the real part would pull out something different if it were given a polar version. So we've gotta get a little more fancy than this.

If I were doing this in Ruby, the answer would be obvious. I'd create a class, I'd implement some accessors called re and im, and I would initialize the number with the re and im parts, just like this. Here I have a complex number that is pretty much equivalent to that funky Ruby code you saw earlier. And if I wanted to do a polar number — here's the key part, it still has re and im in it — I would initialize it with a magnitude and an angle. Then to calculate the real part, I would take the angle in radians, which is calculated down here for me, take the cosine of that, and multiply it by the magnitude; and the imaginary part would be the same thing, except using the sine instead of the cosine. This is just mathematical definition stuff here. But the key is that now I have a polar complex number, and I can write code that uses re and im, and my code no longer cares whether I'm using a polar or a rectangular complex number. Now, what's this called in Ruby? This is duck typing, or polymorphism; we're taking advantage of the OO nature of Ruby to get flexible implementations of our code. The functional version wouldn't work that way, because it always assumed a rectangular implementation of complex numbers.

So if I were to do something similar using functional techniques, what would it look like? Well, my complex number would be a function. make_complex would return a function of one argument, and when I wanted the real part, I would ask the complex number for its real part. It would pass the symbol in, and we would look it up — this is a hash, in Ruby 1.9 syntax. You pass it :re, we look that up, and the value is the real part. The imaginary one looks up :im in the hash, and we return that. So the complex number is a function that internally does a hash lookup of :re or :im and just returns whatever value it has there. The polar version is a bit more complex, but it's a very similar thing. We return a function that takes a message as an argument. It looks it up in a hash, but here, instead of having values in the hash, we've got functions in the hash, and so we just call the function immediately and it does the calculation. So when I have a polar complex number and I ask for its real part, it will calculate the real part for me. I've implemented polymorphism in a functional language by using functional abstractions. So I can make a complex number, and I can make a polar complex number, and this number here is the magnitude and angle of essentially what was our other example, three comma four; this is the polar coordinates of that same number. So when we print it out, we have one, two, and three and four — that three and four is actually this polar number in rectangular coordinates — and then the sum of the two. So by using functional abstractions for polymorphism, I can get the same benefit that I would have gotten from OO.
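And a sketch of that closure-based polymorphism — a rectangular number closing over a hash of values, a polar number closing over a hash of functions, with the same re and im working on both (the constructor names and the radian handling are my assumptions):

    def make_complex(r, i)
      parts = { re: r, im: i }
      ->(msg) { parts[msg] }
    end

    def make_polar_complex(magnitude, angle)
      parts = {
        re: -> { magnitude * Math.cos(angle) },   # calculated on demand
        im: -> { magnitude * Math.sin(angle) }
      }
      ->(msg) { parts[msg].call }
    end

    def re(z)
      z.(:re)
    end

    def im(z)
      z.(:im)
    end

    rect  = make_complex(1, 2)
    polar = make_polar_complex(5, Math.atan2(4, 3))   # the same number as 3+4i

    puts "#{re(rect)}+#{im(rect)}i"     # => 1+2i
    puts "#{re(polar)}+#{im(polar)}i"   # => 3.0+4.0i (within floating point error)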
The cool thing about this example — or the cool thing about the book in general — is that when we do abstractions in Ruby, we tend to do two levels of abstraction. We tend to implement our class here, and then we have code that uses that implementation. Which works; that's fine. The book tended towards a three-level layering of abstractions, where at the lowest level we implement basis features, like getting the real part and getting the imaginary part. Then on top of that we write more complex things that use nothing but the basis. So the basis is the base functionality and touches the low-level implementation; the second layer uses only the basis functions below it; and then everything else uses that second layer. Rubyists tend to combine the first two levels — I would say in general, not always, but in general — but this book really called out these separate layers. Now, what's cool about the three-level implementation is that I can change the implementation layer very, very easily, and I don't have to rewrite a lot of the second layer. It's not all blended together in a single class. So that's really nice; I really like that.

Okay, so again, let's summarize. We've got this whole object versus function duality going on — objects versus closures. You can implement objects with closures, as we saw here. You can implement closures as objects, like Ruby does: Ruby's closures, its lambdas and proc objects, are just objects, but they're closures as well.

Chapter three — chapter three is where I'm stuck right now. I've not had the time to get back into it. But chapter three gets into assignment statements. When you reach chapter three, you realize that as you've read through this book, every single exercise you've done — and there might have been 20, 30, 50 exercises up to this point — has never used a single bit of assignment. And that's because the programming model they use to explain assignment is actually much, much more complex than the programming model you need to explain recursion and just binding variables. So you don't even miss assignment until you get up to chapter three and realize you haven't used any assignment statements yet. So chapter three is about learning how to do assignment — which is odd, that it's not until chapter three that you get to that. So chapter three is cool stuff. Chapter four is a lot about metaprogramming and building meta-interpreters. Then chapter five is a lot about implementation, building VMs and things to run these different versions of Scheme.

Okay, so, summary. Number one, what I want you to take away from this talk is the idea of looking for non-traditional solutions to your problems. If you're doing something and it hurts to do it this way, take a step back and look to see: is this really the best way to do it? Maybe there's a different way I can go about this problem, and go about solving it, that might be better. Look for non-traditional solutions, look for solutions outside the box, when you're working on things. Find good abstractions. I think the Rails 3 talk that we had today, talking about ActiveModel and ActiveRecord, shows Rails evolving to better abstractions, finding the right abstractions to do the right thing. You can see that in the whole Rails community. We are searching for these good abstractions, because abstractions are not right or wrong; they're useful or not useful.
We're finding the useful versions of those abstractions in our libraries right now. Look for those abstractions in your code. I'm a big fan of doing test-driven development. I think that by writing your tests and using your code as a client through the tests, you will find the abstractions — the tests will help you find the abstractions that are better for your code, rather than the ones you might just think of off the top of your head. Finally, embrace duality. There are things that we cannot describe in a singular, precise manner. Sometimes they act like this; sometimes they act like that. Recognizing duality is a wonderful thing. It's a fun thing to embrace.

Resources for this: the whole Structure and Interpretation of Computer Programs, SICP, is available online. There's an HTML version, there's a PDF version of that, and there are also EPUB versions, so if you want to put it on your iPad or Kindle, that's pretty easy to do. Here's the place to go to get that. Our study group is the Wizard Book Study on Google Groups. It's not very active right now, but if you want to get on and stir something up, I'm sure someone will be glad to talk to you on that list. This whole presentation right here is up on GitHub, so you can check it out. All the source code I used in my examples, and a lot of source code that didn't make it into the examples, is also in that project, so you can clone that and take a look at it. And that's it. I think we're doing okay on time for questions. We have plenty of time for questions. I can put the microphone in the center aisle if you want to line up, and we can also bring it to some people. Questions?

That was a great talk, and this question isn't really gonna be about it. But that call syntax — this is the first time that I saw that, and I'm wondering if you share my opinion about being worried about it. Is there actually a reason why you couldn't invoke it with just parentheses?

Yes, there is a reason. I won't disagree that it is ugly, but I think being able to call them with just parentheses would add too many ambiguities elsewhere in the language; it would actually be hard to do and would actually be inconsistent, because nothing else in the whole language is invoked with merely parentheses. That would be the only instance where parentheses cause an invocation, and therefore it would be inconsistent with the rest of the language. Everywhere else in the language, invocation is done either implicitly, when you're calling a method on self, or through a dot.

What about just writing the method name itself?

Oh, I follow you — just calling a method by itself. That is a method name with parentheses, but the parentheses don't cause the call; the parentheses can be omitted. So the parentheses are not causing the invocation, and that would be the only place where parentheses caused the invocation. Everywhere else the invocation is caused by a dot. So it's actually very consistent, if a bit ugly. It looks just like a zero-argument method call. There you go.

Hi. I liked the two-level versus three-level abstraction idea, and I wanted to bring up a couple of good examples of what a three-level abstraction would be: Enumerable uses each as a basis method, and Comparable uses the spaceship comparison operator as a basis, and they provide a lot of additional methods built on top of those basis methods.

That's good, that's good. Yeah, basis methods — the things everything else builds on. So in that example, each is your basis method.
And you implement, on top of your basis method, inject and map and collect and select and all those other Enumerable methods. They're implemented on top of a basis method, which is each. So yes, there are examples of three-level implementation in Ruby, but a lot of times we don't think like that. Yes?

I don't want to turn this into a 1.9 versus 1.8 thing, but I have the same kind of question as the first asker, this time about the stabby proc. I don't understand why we would even consider that to be a proc rather than a lambda. I just want to hear you talk about it.

That's, okay, a slightly different question than the first asker's. The first asker was asking about the dot-parentheses; you're asking about the stabby proc thing. Okay, opinion time. I think Ruby's proc versus lambda versus stabby proc thing is out of control. We have stabby procs, we have procs, and we have lambdas. And how do they differ? Someone tell me quickly, what's the difference between a lambda and a proc? Arity checking, and whether return has local or non-local semantics, okay. Two differences. Now, back to the arity: you've got lambda and you've got Proc.new — which one does which? I don't know. Yeah, someone said they're both the same, and I say no, they're not. They're not the same. Proc.new is a proc. In all versions of Ruby, lambda has lambda semantics: it checks the arity, and a return in the body of a lambda returns from the lambda. Proc.new has assignment semantics for the argument list, which means the same semantics you use for parallel assignment are used to bind the arguments of a Proc.new object. And a return in the Proc.new doesn't return from the proc, but returns from the function in which the proc is defined. That's actually a useful distinction, and I like that distinction. But what's really interesting: now, if you say proc with curly braces, which semantics does that have? It switched in 1.9. It switched in 1.9, okay. What that means is that in Ruby 1.8, when you said proc, it had lambda semantics, not proc semantics, and it switched in 1.9. But that still means we had these two things fighting, and the stabby procs were introduced merely to get around syntax limitations of what we could do with parameters in lam— excuse me, in lambdas: you couldn't have default values for the parameters in a lambda, you couldn't have, say, star args or anything like that in lambdas, because of parser issues mainly. And after Matz introduced stabby procs, someone came up with a parser hack to actually fix that. So yeah, if I had total control of the language, yes, I would go in there and straighten that part of the language out. I'm not sure how.

Jim, do you find it confusing that stabby procs actually create lambdas in your language? Here's another idea: let's just all stop calling it a stabby proc.

You know, that's a good solution, but I like the stabby part. Can we call them stabby lambdas? It actually is a lambda; the word proc isn't in the syntax anywhere, we just, for some reason, call it a stabby proc. Stabby lambdas — I like the stabby part. Your goal at this conference is to come up with a new meme for calling that stabby proc something else that doesn't confuse people. Nice.
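A small sketch of the lambda versus Proc.new differences just described (Ruby 1.9 behavior; the names here are only for illustration):

    add_lambda = lambda   { |a, b| [a, b] }
    add_proc   = Proc.new { |a, b| [a, b] }

    add_lambda.call(1, 2)   # => [1, 2]
    # add_lambda.call(1)    # would raise ArgumentError: lambdas check arity
    add_proc.call(1)        # => [1, nil] -- Proc.new binds args like parallel assignment

    def lambda_return
      l = lambda { return :from_lambda }
      l.call
      :after_lambda         # reached: return inside a lambda returns from the lambda
    end

    def proc_return
      pr = Proc.new { return :from_proc }
      pr.call
      :after_proc           # never reached: return inside a proc returns from this method
    end

    puts lambda_return      # => after_lambda
    puts proc_return        # => from_proc

    # And the part that switched: in 1.8, proc { } built a lambda; in 1.9 it behaves like Proc.new.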
I'm going to interject — I'm going to take the prerogative of asking you a question myself, since I've got the microphone. When you were showing the functional equivalent of polymorphism — okay, here's my function that I can pass a couple of other functions into, and it will create another function that I can use in a polymorphic way — I looked at that and I said, oh, this is JavaScript. And I suddenly saw the whole way of creating polymorphic things with closures for state and a hash of functions. Do you find, having worked through a couple of chapters of the Wizard Book, that it makes you more comfortable with doing polymorphic JavaScript? Does working through the SICP book change the way you write JavaScript?

Absolutely. Before I read this book, all my JavaScript classes were very Ruby-like. I would define them, I would put methods in there, and define them very much like that. After I had read a little bit of the book, I was working on a JavaScript class, an object, and I realized I was just closing things over in closures and returning them, in a very natural way. It's entirely different than what I did before. And it was a very unconscious change. I didn't deliberately go out and change my style of doing JavaScript, but the book definitely affected how I did it. And I was talking to one of our other developers at EdgeCase, and they said, yeah, that's how they approach their JavaScript programming as well.

Has it now gotten to the point that you could look at someone else's JavaScript and tell whether they have an object-oriented or a functional background?

I'm not sure I can, to that degree. I will admit to being a bit of a hack at JavaScript programming at this point in time — I don't do as much of it as I do Ruby programming — so I can't say that I could. But I definitely believe that everybody programs with an accent, and that accent is determined by all the other languages they have used previously.

Jim, thanks for the talk. I want to ask a question having nothing to do with the syntax of Ruby — something quite a lot more abstract, having to do with the duality between the intention of what the program is going to do and the code itself. We've spent a lot of time looking at actual code. There are other ways to represent this — things like Charles Simonyi's intentional programming. Could you talk about the far future for a second?

The far future, I don't know. I'm not good at talking about the far future, but let me change your question a little bit and address something near and dear to my heart. I am working on a talk that I'll be giving in a couple of weeks at JRubyConf — so here's the obligatory plug for JRubyConf, happening at the first of October — and I'm going to give a talk there about testing. One of the themes of the testing talk is that as developers, we tend to write our tests about the implementation, how we implement the code. And I will claim that is the wrong level of testing. We need to write the tests as specifications. And it's more than just switching from Test::Unit to RSpec. RSpec changes the language that we're using, but I find that we actually have to change the way we test as well, to take full advantage of specifying the behavior and not being tied to such brittle tests that break as soon as you change some internal implementation. That's kind of one of my hobby horses right now, because I've just recently run into a code base that does a lot of that very internal implementation testing. Is that kind of what you were getting at? Or do you want me to broaden my horizons?
No, but that was great too. Thanks.

OK. Dodged that bullet.

I'd like to hear you talk about lateral thinking and non-traditional thinking versus clever code — the idea that if it's clever, you should leave it out.

Thanks, Doug. By the way, Doug works in our office back in Cincinnati; he's one of the Gaslight software people. So thank you for that, Doug. Lateral thinking versus clever thinking. I will confess that I love clever code, and that is a failing in me as a developer. I have worked really, really, really hard to become sensitive to clever code and to not do clever things, and instead choose the simple over the clever. And as developers, I think that's generally true: we get into code because we're detail-oriented and we like to make things work, and we enjoy doing this and making clever things work together. But that's not necessarily good development. Sometimes we need to choose a simpler way. Now, with that in mind, I don't think lateral thinking is about clever code at all. I think lateral thinking is about looking for different solutions that are better in some measurable fashion. Rather than just being clever or shorter or fewer lines, it actually has a specific benefit that applies to the code you're working on. I think part of the whole NoSQL movement is a little bit of a lateral-thinking reaction against relational thinking. Relational is a good solution for many, many, many things, but it is not the best solution for some set of things. And lateral thinking will get you out of that SQL box and put you into the NoSQL realm. There are other things like that. For example, as Rubyists, we love object-oriented code. We love duck typing. We love to write our code using classes. Are there times when writing an object-oriented program is not the right solution? I see some people nodding yes. When is that time? And this answer may differ for everybody. Some people nodding yes over here. Got an answer? When do you not use OO? Yeah, anyone? Okay, extremely high performance might be an issue. Mathematically provable systems — mathematically provable systems might be another time. Lack of state — lack of state, okay. Why choose just one? Why choose just one? Okay, you don't have to limit yourself. Again, not choosing just one might be out-of-the-box, lateral thinking. It's not about being clever and showing off — oh, I thought of a really cool solution. If you think of a really cool solution, you've got to stop and measure the benefits, the pros and the cons. You have to weigh both sides and make an engineering decision based upon that. Does that help? Okay, thank you.

I want to go back to the three-tiered abstraction for a moment.

Sure.

I feel like often — for example, in the earlier example of Enumerable — the bottom two tiers are inextricably linked. So for example, with each providing the basis functionality for inject and map: if you want to go from a single-processor system and take your code onto a multi-processor system, it will still work, right? Your code won't fail. But you would get much better performance if you could do it in a map-reduce fashion, and inject won't work that way.

I'm sorry, that question is not loud enough.
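The basis-method layering that question refers to, in miniature — my own example, not from the talk: each is the only method that knows the representation, map and select are written purely in terms of each, and application code uses only those.

    class NumberBag
      def initialize(*numbers)
        @numbers = numbers
      end

      # Layer 1: the basis method. Only this knows the internal representation.
      def each
        @numbers.each { |n| yield n }
        self
      end

      # Layer 2: written entirely in terms of the basis.
      def map
        result = []
        each { |n| result << yield(n) }
        result
      end

      def select
        result = []
        each { |n| result << n if yield(n) }
        result
      end
    end

    # Layer 3: application code, which only touches the second layer.
    bag = NumberBag.new(1, 2, 3, 4)
    p bag.map { |n| n * n }          # => [1, 4, 9, 16]
    p bag.select { |n| n.even? }     # => [2, 4]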