Last time I was here, I sat at Matz's table and he invited me to come to Matsue, you know, to speak at a conference there. And it was a bizarre experience, I'll tell you. The mayors would come out and greet all these American rock stars. And of course, I'm no rock star; I have my business, my tiny little business of four, and it was pretty unusual. So first, how many of you have heard of Seven Languages in Seven Weeks? The languages are Ruby, Io, Prolog, Scala, Erlang, Clojure, and Haskell. How many of you have coded in at least two of those languages? Three, four, five, six? Okay, five is the max. So nobody's made it all the way through the book yet. And it's really not fair, because we have seven languages to get through in about 35 minutes. It's really a 45-minute talk, and the last time I gave it, really the only other time I gave it, I went about 15 minutes over. But we'll get about as far as we can. This was the most interesting writing project I've ever encountered, just because of the breadth of the material. The goal of the project was initially going to be to write the same difficult problem in seven different languages. And then after I got about a chapter or two into the book, I quickly decided it was more important to show a problem that really fit the language I was trying to teach. So the problem became to take somebody to the point where they could solve a non-trivial problem in seven different languages. The technical challenges just kept coming, and they didn't quit coming until I finished the last chapter, and then they came a little bit after that. It was just absolutely mind-boggling to get through learning that many languages and get to the point where I could actually teach something interesting in them.
The first writing challenge came when I turned from page 34 to 35. That took me from Ruby to Io. And I thought, something is missing here, and I looked at the transition and it looked okay. At least the words made a transition, but it was hard to move the reader's head from one language to another with a simple page turn. I brainstormed with my editor about this for a little bit, and eventually we came up with the idea of associating each programming language with a character from the movies. And along the way I want to do that not just with these seven languages but with the other languages that were influences along the way. So we're going to go through the languages in the book in chronological order and then talk about the influences, the other languages that impacted them or were going on at the same time. We're going to start in 1972 with Prolog. So what movie is this? Rain Man. Rain Man, right. I thought this was a good analogy because Prolog seemed so alien to me. When I was writing a Prolog application I would sometimes think, how did it know that? And sometimes I would think, how doesn't it know that? So Rain Man was a great example. Now, in '72, if you were writing a high-performance application, you'd use this language. Shout them out as you know them. It's assembly language, right? And just to give you the idea, this is C-3PO. He doesn't carry a weapon. He doesn't cut metal or anything. He just translates one line of whatever language is coming in into whatever goes out the other side, right? One line of English-looking code to one line of machine code. That's kind of what we're doing here. And if you were writing a business application, you'd be using COBOL. I'm not sure quite which is older, Rocky or COBOL. You know, I think they're both going to outlive me. And if you were writing a scientific application, yeah, you'd be writing Fortran.
You know, the front of this quote is, "If my calculations are correct, when this baby hits 88 miles per hour, you're going to see some serious shit." Back to Prolog. This was a fascinating language. This was the first language that really changed the way I was thinking in the book. Io did a little bit, but since that's a prototype language, it's close enough to an object-oriented model that I could approach problems in the same way. For Prolog, I had to change the way that I was thinking. This is a Hello World application. And the way to read it is that the thing on the left is true if there's a set of variables, well, there are no variables here, they're all constants, but if there's a set of variables on the right-hand side that makes that thing true. The non-trivial problem that I decided to solve in that language was solving a Sudoku. And I picked that problem because it was really the first non-trivial problem that I had decided to write in Ruby. And I was really proud. It took me about two weeks, and I wound up with, I don't know, a couple hundred lines of code. And I'd built into it eight or nine solution techniques to reduce the problem before I started guessing. Now, with Prolog, I didn't have to think about any techniques at all. I described the problem. This is what that looked like. The first line says that an empty list is valid. And the second line is going to take a list of rows, columns, or squares, and it's going to say that list is valid if, for the first element, all of those are different and the rest of the list is valid. So it's a recursive definition. Then you basically define a board, and, you know, being a new Prolog programmer, I kind of brute-forced this thing. So I laid out the 81 cells, the rows and columns, as 81 elements, and then there's a board, and I associate the board with a list of those things in the right order. Then I said everything in the board has to be a digit from one to nine. So you can see what I'm doing here.
I'm describing the problem. These things are rows. These things are columns. And these things are squares. And the list of rows has to be valid, the list of columns has to be valid, and the list of squares has to be valid. And you are done. This is what the API looks like in use. And that's pretty cool. I mean, we have built a Sudoku solver in Prolog. I can't count the number of times that I've had to do something constraint-based, you know, a logic problem in Java or a rules-based problem in Java, where I could have reached for Prolog and had the solution in 20 minutes rather than taking three or four weeks or even a month to write some Java or some Ruby code. So Prolog was the first language in the book that changed the way that I thought. And this is the way the solution comes out. That's basically all I have to say about Prolog. The next language that we're going to talk about chronologically is Erlang. Yeah, this is Agent Smith from The Matrix. Have any of you written any Erlang? So what do you think of the syntax? How many people really, really like it? How many, okay, how many don't? So, I don't like it at all. I think that Erlang is a language with a lot of power but no soul. And that's why I picked Agent Smith from The Matrix. I want to talk about some of the things that were going on at the same time. Here's the first language. What language is that? Basic, right? Sesame Street. I think the Muppets are actually smarter than they're perceived to be, right? There are all these inside jokes. But then there's the close cousin of Basic. What's that? Visual Basic, right? Yeah, you guys get that. So, just like with Vacation, starting the trip is easy and everything's a little bit harder than it needs to be. In college, I was writing this language, Pascal, right? Pascal now, Haskell later, right? But I think that, like Forrest Gump, this is a wise language.
It's not like super intelligent in terms of the features that are in it. But the omissions and the shape of the language really were very well done, I thought. So it's a wise language. There's another one. Anybody know what that one is? That's Smalltalk. And this is an interesting language because it was getting big at the same time that Joe Armstrong was designing Erlang. I looked at that language and I saw something beautiful, a way to organize things and think about problems in a different way. Joe Armstrong looked at Smalltalk and he saw mutable state, and it scared him to death. So he never went down that path. He intentionally stayed away from that path. Instead, he was writing in this language, Fortran, and in this language, Prolog. In fact, the original versions of the Erlang interpreter were written in Prolog. This was the primary influence. Back to Erlang, this is what a Hello World would look like. And the reason the syntax eventually departed from Prolog is that they basically had to, to make sure it could handle the phone switch requirements. And those were that this system had to be live all the time. Now, when I think about scalability and reliability, the first things that come to my mind in language features are things that handcuff you, things like strong static typing. They're going to scare everybody in this room to death because we're so used to things on the Ruby side. But that's not the way that Erlang approached the problem at all. Erlang approached the problem with a very effective virtual machine that knew how to monitor what was going on with other processes. So whenever a process dies, you are guaranteed that notification goes out to the other processes it's registered with in the same virtual machine. So here's a trivial application that's unreliable. It's Russian roulette.
If you get a three, it's going to basically kill the process, and anything else is going to print the word "click" and keep going. So this is an unreliable process. And this is another process that might watch the first process. It returns a new roulette process. And if it gets an exit message, it's going to pass the new message to itself, starting another roulette process. So it's like Russian roulette without the consequences. What's the point? So that's all I've got to say about Erlang. Next chronologically, and definitely the last language in the book in terms of the difficulty of actually grasping it, is the Haskell language. And you guys get this comparison right off. Here's a language that you guys are never going to get. I have to give you a clue. This was a commercial language that was all about lazy processing. So the clue here is lazy. The language is called Miranda. When Miranda was created, a whole lot of offshoots of functional programming started to get built to address some of the holes in the thinking of the time. And academia was saying, wait a minute. We know that these ideas are important, but there's too much proliferation of languages to actually control. So they formed a committee to actually build a language. And to my knowledge, this is the only effective language that has been created and built by a committee. I know you might say Java, but Java was really created in a small lab and the committees came later. This language was actually designed, built, and grown in committee. Now, some of the other things that were happening at the same time: well, of course there was Smalltalk, though that was very much a competing thought group, the object-oriented languages. Of course Erlang was created. Now, Erlang lets you build these beautiful key-value databases like CouchDB that we talked about in the last session. It's not a pure functional language.
It's pretty close, but there are places where you can kind of step outside of the functional paradigm to do things like erlang:display to display a string, or actually store an element in a hash table. And of course C was a pretty big influence in the universities at the time. With Haskell you really don't have I/O without shifting into an idea called monads, and I really didn't want to get into that concept in a talk this short because monads are pretty conceptually difficult. What you really do is define functions that return values, and there's no notion of anything like a side effect or a change in state. It's a very pure functional language. It's not a strict functional language, and what that means is that you can declare a function that defines basically an infinite sequence. This is what I'm talking about. So the first function there is an infinite sequence that represents part of a definition of the Fibonacci sequence. The idea is that the next number of the sequence is the sum of the previous two numbers. The colon on the right-hand side is list construction. So lazy Fibonacci of x and y is x followed by the lazy Fibonacci sequence of y and x plus y, right? That's an infinite sequence with no bound on the back end, and we're going to anchor it so that the first two numbers are one and one. So I'm defining an infinite lazy sequence based on a lazy sequence with the second function, and with the third function, for the 500th Fibonacci number, I take the first 500 and then I drop the first 499 to leave a one-element list, and I just take the head of that list, right? So these are two lazy sequences that are used to compute a concrete value. Now, the reason this is interesting is that when you're using lazy sequences, you're not computing values until you absolutely need them.
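The same lazy-sequence idea can be sketched in Ruby terms with Enumerator, which likewise defers computation until a value is demanded. This sketch is an editorial analogy, not code from the book:

```ruby
# A conceptually infinite Fibonacci sequence in Ruby. Like Haskell's
# lazy list, the Enumerator only produces numbers when they're demanded.
fibs = Enumerator.new do |yielder|
  a, b = 1, 1
  loop do
    yielder << a
    a, b = b, a + b
  end
end.lazy

# Ask for the 10th Fibonacci number; only ten values are ever computed.
tenth = fibs.first(10).last  # => 55
```

The `first(10)` call plays the role of Haskell's take: it forces just enough of the sequence to answer the question, and nothing more.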
So they can be very efficient at times. But sometimes it makes the ordering of things in a program hard to reason about. Now, the other thing about Haskell is the typing system. My youngest daughter, when she was two years old, said something that to me seemed very wise. She said, "I'm allergic to bears," right? I'm allergic to bears. And after I programmed in Java, I thought that I was allergic to strong static typing. I learned that I was allergic to bad strong static typing. When I look at Haskell's type system, it is absolutely beautiful. It has very little ceremony and a whole lot of polymorphism, it really shapes the way that you think about and reason about your programs, and it makes the Haskell compiler more than an artificial safety net. It's a real safety net that, particularly if you're writing functions in very small increments, can save you a lot of pain. This is just a hint of the flavor of what some of it looks like. In the first line of code, I'm making a data element called a suit, and that's spades or hearts or diamonds or clubs. The next data element is a rank, and that's the top five values for a deck of cards, tens through aces. Then I've got a type of card. This is a type which has a rank and a suit. And then I have a type deck, which is a list of cards, and a type hand, which is also a list of cards. So those are the atoms, the very smallest pieces of the type system. And then I have a function with a type signature, and the function defined after it. Well, the type signature says the shuffle function takes a deck as a parameter and returns a deck. And the interesting thing about Haskell is you don't have functions with more than one parameter, though there's some syntactic sugar to make it look like that. Essentially, all functions take one parameter, and you can curry that out, so that I might have an API that says, okay, deal me a hand.
Well, you have to know the number of hands to deal and the size of the hand. So you can imagine building a function that takes hands and size and plugging in only the size, and that's going to leave you a function that still needs the number of hands to deal. That's the idea of currying and partially applied functions, and what Haskell is able to say is that the type signature for deal is a function that takes an integer and returns a function that takes an integer and returns a hand. And when you do that, you get an incredibly sophisticated and rich type system that can enable you to solve a whole lot of types of problems polymorphically, and you don't have a lot of type declarations, you have a lot of inference, so the type definitions don't get in your way. Okay, that's about as much as I'll say about any of the individual languages in terms of code examples. Let's shift gears a little bit. Who's this? Yeah, Edward Scissorhands. This is one of my favorite movie characters because he's this beautiful yet grotesque monster that's the product of bridging two cultures. And if you think about it, that's exactly what's going on with the Scala language, right? I would submit that those cuts on his face are not from the scissors; they're really from the blogs on both sides of the object-oriented versus functional debate. So these are some of the influences on Scala, some of the things that were going on at the same time. Of course, Haskell; if you ever talk about a functional language that was created after Haskell, it's a big influence. The typing models, the lazy processing, the way they handle currying, a lot of things are just beautifully expressed in the Haskell language. What language is this? Java, of course, right? So when you're basing your whole business model, your whole ecosystem, on the virtual machine, and you're object-oriented, that is necessarily going to shape the way your language is built.
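Coming back to that currying idea for a moment: Ruby can express the same partial-application shape with Proc#curry. The deal function below is a hypothetical stand-in for the book's card-dealing example, using a plain numeric deck:

```ruby
# deal takes a hand size and a number of hands; currying it and
# supplying only the size leaves a function that still needs the
# number of hands -- the same shape described for Haskell's deal.
deal = lambda do |size, hands|
  deck = (1..52).to_a                 # stand-in deck of card numbers
  deck.first(size * hands).each_slice(size).to_a
end

deal_fives = deal.curry[5]            # partially applied: hand size fixed at 5
two_hands  = deal_fives[2]
# => [[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]]
```

Here `deal_fives` is itself a Proc still waiting for its last argument, just like the partially applied Haskell function.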
So we're really merging two worlds, the functional world and the object-oriented world. And since Scala is built not only on the JVM but also on the CLR, you have to consider Java's evil twin. What's that? C#, of course, right? Okay, cheap shot. Here's another bridge language. This wasn't really an influence on Scala, but I thought it was interesting to include because it's a bridge language between the procedural and the object-oriented worlds. What's this? Anybody guess? Ada. Who said that? That's awesome. So really, Ada, you know, it's Buzz Lightyear: he thinks he's object-oriented, thinks he can fly. He can't, really, but it gets you close enough, right? Ada really had the important idea of the encapsulation of data and behavior. So this is what Hello World in Scala looks like. And the way that you feel about Scala really depends on your perspective. If you come from where we have come from in this room, a place where we have this beautiful dynamic typing and a lot of flexibility and freedom, and you get plopped down into the Scala community, you're going to feel like Edward Scissorhands lying on that waterbed with all those sharp scissors, right? It's going to feel too restrictive, too awkward, and too alien. But if you've come from the Java world and you've been coding that way, and suddenly you're handed things like closures and currying and a better concurrency model and ways to actually put the shackles on the language in the places they belong, like around mutable state, you're going to feel really good about the language. And that's kind of the way that the Scala community has grown up. Okay, I'm going to shift gears. Now we've moved up through 2003, 2004. I'm going to move up to around 2005. Actually, Io was created in 2002. But around the time Rails came out from 37signals, what was that application somebody built? 43 Things?
Does anybody remember 43 Things? I think that's what it was. One of the highest-rated goals on that site for a long time was "learn the Io programming language." And I think that basically started a resurgence and kind of started the snowball rolling for Io, and it's really since petered out, but I think this is a small, beautiful language that has a lot to offer. It's really a prototype language that is very much a messaging language. So you have, like, an object, which is a prototype. It's not based on a class. It's based on cloning another instance. And you call messages on that. That's essentially what the syntax in Io looks like. It's an object and a message with its arguments, and that's going to return an object, and you basically chain those together, and that's it. You also have another form, which is an object, an operator, and an object, and that's translated to the first form. And that's basically it. But since it's a prototype language, and since Steve Dekorte, the designer, put no restrictions on how you could override a slot, which is either data or behavior in a prototype language, it's really a tremendously flexible language. So you can actually go as far as doing something like this. This little innocuous piece of code does something like this: if the string is nil, I want you to take the call sender, which is whoever invoked this method, whoever called this message. So now we're whoever called us. You take one of those slots. It was nil. It's not supposed to be nil. I'm going to retroactively set that value to an empty string. That is like so right and so wrong, I can't begin to describe it, right? So wait a minute. I'm taking the object that called me, I'm getting a nil value, I'm not supposed to have a nil value, so I just kind of reach behind myself and fix it for you. I'm going to fix it for you. This says everything about Io, right? I'm going to fix it for this application.
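Ruby can't safely reproduce that reach-back-into-the-caller trick, but the prototype model itself, objects cloned from instances rather than instantiated from classes, can be sketched with Object#clone and singleton methods. The names here are illustrative, not from the book:

```ruby
# Io-style prototypes in Ruby: no class definitions, just an instance
# whose "slots" are singleton methods, and clones that override them.
animal = Object.new
animal.define_singleton_method(:speak) { "..." }

dog = animal.clone                     # based on an instance, not a class
dog.define_singleton_method(:speak) { "woof" }

dog.speak     # => "woof"
animal.speak  # => "..." (the prototype is unchanged)
```

Object#clone copies the receiver's singleton class, which is what makes the clone inherit the prototype's slots before overriding its own.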
But here are some of the other things that it got right: the concurrency, for one. The performance of this thing should really be kind of slow and awkward, because it really is based on a pretty limited object-oriented model. But he got the concurrency libraries right. So you're doing things like messaging, which restricts the way that you can go from object to object. It's got an actor model, which builds like a queue for concurrent access between objects. It has something called coroutines: we do mostly preemptive multiprocessing; this allows cooperative multiprocessing. And immutability is built into the model much like it is in Scala. It's also got a very simple syntax, to the point that I think a pretty good analogy is the Lisp language, where, since everything is in the same format in Lisp, you don't spend a lot of time learning the syntax. You do spend a lot of time learning the power behind the syntax. But there's also this idea that data is code, and since I am manipulating messages in this very simple syntax, it's very easy to do metaprogramming types of things. And since the syntax is small, and since the libraries are relatively new and small, the footprint is absolutely tiny. So when I asked Steve where he's found Io, he said, well, it's not in production in too many places. But then he said the places where it was in production were kind of mind-blowing, like satellites, and Pixar Studios in some of their graphics processors, and one of the big automotive manufacturers: places where they really had to have a small footprint, excellent concurrent performance, embeddability, and metaprogramming. So I really liked my time with Io. Okay. So the last language that we'll talk about before we kind of wrap up with Ruby is Clojure. More on that in a second. Clojure, I think, is really Lisp on the JVM, but it's actually a little bit less than that and a little bit more than that.
I say it's a little bit less than that because there are some limitations in this Lisp that the Scheme folks are really kind of railing about. Things like, you know, we have iteration; what do you have in functional programming? Recursion. And what's the big optimization? Tail call optimization, right? So in Clojure they solve this problem by making tail recursion optimization very explicit and very wordy, right? But it adds some things that are incredibly powerful that the original Lisp dialects don't have. First, there are a couple of syntactic tweaks: brackets in some places and braces in some places, so you don't have this endless chain of parens and it's a little bit easier to read. Second, there are some restrictions on when you can use macros and reader macros, so that you don't have this proliferation of dialects. Third, there are some excellent concurrency constructs that have been invented especially in the last 10 years, like software transactional memory, and these ideas are built into the Clojure libraries very artfully and tastefully. So it's a powerful Lisp dialect on probably the most important deployment platform of our time. The big influences are, of course, Java and Lisp. But you can't talk about just Lisp. You have to talk about the dialects of Lisp, right? There's Common Lisp and Scheme, and, well, maybe there's more than a couple. Maybe there's more than a couple. That one's Microsoft's, right? And the Hello World looks exactly like you would expect, and I don't really want to break Clojure down by syntax. I really want to talk about this marriage between the JVM and the beautiful language. You know, the JVM, the whole Java community, it really needs a drink, right? It needs an injection of fun, an injection of intelligence in the language. I mean, they're still talking about how great closures are, right? You're welcome. That was, like, what, five or ten years ago?
But still, it's the best deployment environment in the world. The JVM is solid and it's robust. It has some limitations, like, you know, the inability to do tail call optimization. But it's still out there. Politically, it's the best deployment option in the world. And, you know, until the very last part of the last Star Wars episode, Yoda was in exile, right? That's like the Lisp language. It's been in exile for years. And as soon as it starts to get a little bit of momentum, we just see another splinter faction kind of break off and break that momentum down into different dialects. But still, the ideas behind Lisp are powerful and compelling. And I think that, given this marriage, it really has a chance to work. You know, I'm not sure if I like the idea of everybody coding Lisp or not. But I think we stand a pretty good chance of seeing a successful marriage here. And that's all I have to say about Lisp for a little bit. Which brings us to Ruby. Yeah, this is Mary Poppins, and everybody gets it, right? Ruby is about the love and the passion, like Mary Poppins was about the magic and the love and the passion. Hello World in Ruby. That says it all, doesn't it? I mean, who would do that? Who would do that? We would do that. We would totally do that. But when all is said and done, I just wrote a book called Seven Languages in Seven Weeks. And, you know, you can't help but kind of question what you're doing, right? You see all these cool things in all these cool languages. And at the end of the day, I thought, Ruby is still where I need to be. I want to talk a little bit about that, first from the perspective of the things that I really like about the language. I remember when there was this big community that was dueling with the Ruby community, and you know the one I mean. What's that? What language is that? Python, right, right, right.
So, you know, there's one way to do everything. There's the right way, right? And Ruby kind of broke that down. Ruby kind of injected the Mary Poppins, right? The magic and the fun, back into the programming community. And it was done with a little bit more venom than I would prefer, like when the Rails stuff started out and, you know, the idea was to poke a finger in the eye of the Java establishment. But I still really appreciate the way that Ruby brought the passion back to programming. Can I get a hand for that? Yeah. It was very good for my career. I probably would have moved on to a second career if I hadn't found Ruby. But still, there are some things that I saw in the other languages that I would like to have in Ruby. I'm starting to understand that there's a little bit too much of this language in Ruby. What's that? Perl, right? So sometimes there's still not enough discipline. And it shows up in places that, you know, can cost me a good amount of time. And I think that there are some fundamental problems with the way that we code object-oriented applications. I'm almost ready to say that the object-oriented programming paradigm was a mistake. Not quite, but I'm almost ready to say that. I think that as multi-core systems become more prolific, we will suffer, and we'll suffer because we're programming in a paradigm that doesn't handle concurrency very well. And so in Ruby, we have to be pretty careful about that. So there are some limitations. There are some things that I'd like to see added to Ruby, like better guards around mutable state, ways to lock down classes so that after a certain point, after the environment's ramped up, I can lock things down and say, okay, no more touching these classes. We're immutable from this point. But I guess at the end of the day, something keeps me coming back to Ruby.
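Part of that lockdown wish does exist in Ruby today: Object#freeze makes an individual object immutable from a chosen point onward. A sketch of the idea; note it freezes objects, not the whole class-reopening machinery being asked for:

```ruby
# After "ramp-up", freeze an object so any later mutation raises.
config = { host: "example.com", port: 443 }
config.freeze                    # immutable from this point on

begin
  config[:port] = 80             # a later write is rejected
rescue => e
  frozen_error = e.class         # FrozenError on modern Rubies
end
```

This covers per-object state; guarding against classes being reopened and redefined at runtime would still need something the language doesn't provide.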
And if we're able to do more and more in a scripting way, and we're able to tie into these key-value databases that are written in languages like Erlang, maybe that's enough, because after all, a spoonful of sugar does help the medicine go down, right? It's the expressiveness and the power of the language that lets us turn that program into our design document. And that's kind of the holy grail for software design. So that's about all I have to say. I'm only two minutes over. I guess I have a few minutes for questions. Yeah. So writing the book took me not quite a year and a half, but closing in on that. And yeah, I had to learn a lot of languages for the book. I knew two of the languages pretty well: I knew Erlang pretty well and I knew Ruby pretty well. Io was pretty easy to pick up because it's a prototype language. Everything else was really a struggle. The hardest one to learn was Haskell, and in fact, I put that one off for nearly a month. I didn't even start it because I was afraid of it. And the monads especially. So in Haskell and in a lot of functional languages, the easy things are hard and the hard things are really easy. So you spend a lot of time writing and teaching about things that object-oriented programmers don't think they have to care about, like how to store something in a hash table. Well, wait a minute, that's a side effect, and that's mutable state, and that kind of blows up everything that you're doing. So you wind up writing a lot about software transactional memory and the different ways to handle that in all the different languages. So that was demanding. It was a hard book, but the funnest book I've ever had to write. We did a poll of the readers at the Pragmatic Bookshelf, and all of these languages made like the top 15. I really took the top nine, and I cut Python because I only wanted one object-oriented language and I didn't want to spend my time learning another object-oriented language well.
I cut JavaScript because it's really too many different programming models at once, and I thought that I had to add back a programming language, and so it was between Lua and Io, and I picked Io, and I'm kind of glad I did. If I do another book, Lua will probably be in there, and Python will probably be in there too. But everything else was basically right down the list of the languages that the Prag readers liked. Thank you.