Alright. So it is a real pleasure for me to introduce Jim Weirich. I'm sure Jim feels like I'm a little stalker, but he's my favorite person in the Ruby community. I love his approach, everything he's done. Rake, Builder, I think they're just elegant and beautiful. And last year we talked about who we wanted as a keynote, and I always said Jim. And this year, you know, I got Jim. So it's a great honor that I introduce Jim Weirich. Thank you very much. What he didn't mention: there's a story behind that. You see, last year I was working for a company that didn't allow me to go out to conferences. I'd have to take vacation time and foot the bill myself. This year, however, I'm working for EdgeCase, and they allow me to do all kinds of things. So instead of approaching me, Mike approached my boss and asked if I would come out. I only learned about it afterwards. Before we start, I've got to show you my background. This is actually the standard background I have on my Mac all the time, just slightly modified for tonight. Are we coming up? Okay, we'll go back into this. Mike wanted me to come out here, and I asked him, what do you want me to talk about? He said, I want you to talk about something that is of vital importance to Ruby programmers, or to programmers in general. I said, I have just the topic. I know exactly what every programmer needs to know. And that's going to be my topic tonight. You've seen this before, right? Keep it simple, stupid. Thank you. I added an extra slide just for you: no questions. For Mike, since you've waited a year for this, it just so happens I have a backup talk prepared. Since we're done with Shaving with Occam, tonight's talk is going to be whatever I want to talk about. I'm going to start off with a quote. I'm going to read it to you; it's going to be on the screen too. It's a lengthy quote, so bear with me. Very early in my programming career, I read a book called The Mythical Man-Month by Frederick Brooks.
And there was about half a page in there that just really struck home and resonated with me when I read what he said about programming. This is it. He said: The programmer, like the poet, works only slightly removed from pure thought-stuff. He builds his castles in the air, from air, creating by exertion of the imagination. Few media of creation are so flexible, so easy to polish and to rework, so readily capable of realizing grand conceptual structures. Yet the program construct, unlike the poet's words, is real in the sense that it moves and works, producing visible outputs separate from the construct itself. It prints results, draws pictures, produces sounds, moves arms. The magic of myth and legend has come true in our time. One types the correct incantation on a keyboard, and a display screen comes to life, showing things that never were nor could be. The computer resembles the magic of legend in this respect, too. If one character, one pause, of the incantation is not strictly in proper form, the magic doesn't work. Human beings are not accustomed to being perfect, and few areas of human activity demand it. Adjusting to this requirement for perfection is, I think, the most difficult part of learning to program. Frederick Brooks, The Mythical Man-Month: Essays on Software Engineering. Here, we start at the very beginning. I was a physics major in college back in the late 70s. Boy, that dates me. Don't anybody say, "I wasn't even alive back then." Yeah, I saw that. And so I'm a physics major, and I needed a particular math course that didn't happen to be offered that semester. So my advisor says, why don't you take this introduction to Fortran course? It'll be useful, and who knows, you might enjoy it. Little did he know.
So I signed up for the introduction to Fortran course, and I went to the class and I sat in that class. And I knew a little bit of Fortran, because as a physics major I had to write programs that would graph the results of my physics experiments. So I knew enough Fortran to drive a plotter and to make graphs. So I sat down in class, and the instructor, the very first day, started writing on the blackboard. And the stuff he wrote looked like this. Now, I knew Fortran, and that had way too many parentheses to be Fortran. And I sat in that class and I puzzled over this stuff he wrote on the blackboard. And it took me about three classes. I would come in, I would look at that and say, huh, what is this? This is not Fortran. Why am I in here? I don't get it. And suddenly, on the third day, something clicked. Something turned in my mind, and I said, I get it. I see what he's doing. I've been hooked ever since. I took all the computer science courses they would allow me to take and still finish and get a physics degree. I loved it, and I've been in programming ever since. And I remember this was the piece of code I saw on the day that it kind of clicked in my mind what was going on. And we're going to talk about this a little bit. Do you recognize what this is? This is Lisp. And Giles mentioned Lisp earlier in this conference, so we're going to run with that theme just a little bit. Lisp is a list processing language. You process lists of things, such as a list of fruits: apple, banana, and pear. They're very simple to construct. Parentheses surround the list, the items are space delimited, and there's nothing else you need to do to make a list. So this is a simple list of three elements. This is another list of one, two, three elements; the third element happens to be another list. And let's ignore the funny tick marks for right now. We're going to get back to those.
The difference between the first list and the second list is that the second list is also an expression that can be evaluated. And Lisp has very simple rules for evaluating expressions. It says: look at the first thing in the list, and if it's a function, apply that function to the rest of the list, after you evaluate the rest of the list. So we apply the function member. And if we go back a screen, there's member; we actually define member right here. So member asks the question: is banana a member of the list apple, banana, pear? And the answer in this case is true. Well, that's very interesting. But why do we need the ticks? We need the ticks because Lisp evaluates everything in the list. It figures out the first thing's a function, then it evaluates banana. Well, what's the value of banana in Lisp? It doesn't have a value. So you put a tick in front of it: the value of anything behind a tick is that thing itself. It's a quote mark. So we quote banana, and we quote the list apple, banana, pear. Why do we have to quote? Well, let me give you another example. Let's surround that apple, banana, pear in the first example with a setq fruit, and that sets a variable in Lisp. So the variable fruit has the value apple, banana, pear as a list. And if we evaluate the list member, tick broccoli, fruit, notice fruit is not ticked, so we'll go grab the value of that. We'll grab the value of tick broccoli, which is just broccoli. And we ask the question: is broccoli a member of the list apple, banana, pear? And the answer will be false, spelled N-I-L in Lisp. We're used to that in Ruby anyways; nil is false in Ruby as well. So that's not so surprising, and that's probably where it comes from in Ruby. Cool. So syntax in Lisp is trivial. There are only two things: atoms and lists. We've seen the atoms; they're just identifiers for the most part. Numbers are also atoms, but we're going to pretend for right now that numbers don't even exist.
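In Ruby terms, those two member questions can be sketched by mapping Lisp atoms to Ruby symbols and Lisp lists to Ruby arrays (my own illustrative mapping, not anything from the talk's slides):

```ruby
# Lisp atoms as Ruby symbols, Lisp lists as Ruby arrays. Quoting matters
# because evaluation looks names up: an unticked fruit means "the value of
# the variable fruit", while 'broccoli means the symbol broccoli itself.
fruit = [:apple, :banana, :pear]   # (setq fruit '(apple banana pear))

fruit.include?(:banana)    # (member 'banana fruit)    => true
fruit.include?(:broccoli)  # (member 'broccoli fruit)  => false (nil in Lisp)
```

Ruby gives back true and false where Lisp gives back t and nil, but the shape of the question is the same.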
And lists are just things in parentheses. And you can nest lists. So the first thing is a list of A, B, C. The second thing is a list of two elements: the first element is a sublist of A and B; the second element is the atom C. We've seen the member expression before. And this empty-parentheses thing here, that's the empty list. And in Lisp, there is this very bizarre relationship between nil and the empty list: they are the same thing. Nil represents the empty list. Another kind of bizarre choice in Lisp. I think Scheme, which is a dialect of Lisp, doesn't have this bizarre relationship, but this is the Lisp I learned back then. So what can you do with lists? Well, there's some basic functionality to operate on a list. There's car, which takes the first element of the argument that you pass it. So if you give it a list A, B, and C, it will return A, the first element. There's cdr, which returns everything but the first element. You can think of these functions kind of as head and tail. Why they're named car and cdr is a historical accident. The machine they first implemented Lisp on had two registers, called the address register and the decrement register. So car was the "contents of the address register" and cdr was the "contents of the decrement register." A little historical trivia for you. There's a third function called cons, and that means construct. We're going to construct a new list out of two things. The head of the new list will be the first argument, in this case apple. The rest of the list, the tail of the list, will be the list in the second argument, which is A, B, and C. So you cons apple onto A, B, C, you get apple, A, B, C. Very simple. Take apart lists, build up new lists: very simple list construction operations. We've also got a function called eq that compares the value of two atoms. And that only works on atoms. If they're the same atom, eq will return true. If they're different atoms, it returns nil.
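Sketched on Ruby arrays (an illustration only; real Lisp builds its lists out of cons cells, but arrays show the same head-and-tail behavior):

```ruby
# The list primitives, modeled on Ruby arrays (illustrative names).
def car(list)        = list.first     # head: first element
def cdr(list)        = list[1..]      # tail: everything but the head
def cons(head, tail) = [head, *tail]  # build a new list from a head and a tail
def eq(a, b)         = a == b         # only meaningful on atoms, as in the talk

car([:a, :b, :c])          # => :a
cdr([:a, :b, :c])          # => [:b, :c]
cons(:apple, [:a, :b, :c]) # => [:apple, :a, :b, :c]
```

Note that cons builds a new array rather than mutating its arguments, matching the take-apart, build-up style described above.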
If you try to use eq on lists, you get funky results. Sometimes it returns true, and sometimes it returns nil, and it's really hard to tell when. So you just don't use eq on things that are not atoms. Just one of the rules of the game. What else do we have? Ah, we've got a function called atom that asks the question: is my argument an atom? True if it is, false if it's not. Five functions: car, cdr, cons for list manipulation; eq for testing equality of atoms; atom to test whether you're an atom or not. And then on top of that, we have some special things. These are special forms. These are not functions in Lisp; they are handled in a special manner. The first one we're going to talk about is something called cond, which is the conditional in Lisp. Cond takes a list of ordered pairs, and the pairs have a condition and a value. And cond will go through and evaluate the condition, and if the condition is true, it returns the value. If the condition is not true, it'll move down to the next pair and see if its condition is true and return its value. If not, it'll continue going until it gets to the end of the list, in which case, if nothing is true, it'll return nil at the very end. So it's a big if-else type structure. The key is that it's not a function, in that you only evaluate the value if the condition is true. Lisp functions always evaluate their arguments; cond does not always evaluate, so it's not a function. And that's an important distinction. Lambda is function abstraction. We've seen lambda in Ruby, right? Same thing. It creates an anonymous function with arguments and an expression that gets evaluated when you call it as a function. This is page 13 from the Lisp 1.5 Programmer's Manual. This is the Lisp interpreter, minus some helper functions that are pretty trivial. Oh, we got power. Okay. I have to walk over and touch the keyboard occasionally, okay? Okay.
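Those pieces, the five functions plus cond and lambda, are enough to sketch an evaluator. Here is a deliberately tiny one in Ruby, with atoms as symbols, lists as arrays, and the environment as a plain Hash. This is my own much-reduced illustration, not the page-13 code:

```ruby
# A tiny Lisp evaluator sketch: atoms are symbols, lists are arrays.
def lisp_eval(form, env = {})
  case form
  in Symbol then env.fetch(form)  # an atom: look it up in the environment
  in [:quote, x] then x           # the tick: the value of 'x is x itself
  in [:cond, *pairs]              # special form: values evaluated lazily
    pairs.each { |c, v| return lisp_eval(v, env) if lisp_eval(c, env) }
    nil                           # nothing matched, so nil, as described above
  in [:lambda, params, body]      # function abstraction: close over env
    ->(*args) { lisp_eval(body, env.merge(params.zip(args).to_h)) }
  in [fn, *args]                  # application: evaluate everything, then apply
    lisp_eval(fn, env).call(*args.map { |a| lisp_eval(a, env) })
  end
end

# The talk's "not", written with nothing but lambda, cond, and quote:
lisp_not = [:lambda, [:a], [:cond, [:a, [:quote, nil]],
                                   [[:quote, :t], [:quote, :t]]]]
lisp_eval([lisp_not, [:quote, :t]])   # => nil
lisp_eval([lisp_not, [:quote, nil]])  # => :t
```

The important line is the cond branch: it evaluates a pair's value only after its condition comes back true, which is exactly what makes cond a special form rather than a function.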
This is the Lisp interpreter from page 13 of the Lisp 1.5 Programmer's Manual, an ancient, ancient, ancient version of Lisp. But if you notice, you see things like atom and eq in the definition of this pseudocode. And even though it's not written in parentheses, this could actually be transliterated to the parenthesized style of Lisp very trivially. So essentially, this is the Lisp interpreter, in Lisp itself. And the thing to point out is that you can see car, cdr, cons, atom, and eq; lambda; label, which I'm just going to skip over; quote, which is how the tick is handled; and cond, right there. So this Lisp interpreter handles all those primitives I've shown you. Now, the interesting thing is, you take those five functions, the two special forms, and the Lisp way of applying functions, and this is a Turing-complete language. Which means that it can calculate anything that any other programming language can calculate. This language, composed of five functions, two special forms, and an evaluator, can calculate anything as well. It is Turing complete. Very simple constructs, put together in amazingly interesting ways. Now, how can that work? We haven't mentioned numbers at all, right? We haven't mentioned anything like ands, or ors, or nots, or anything. So how do you put these simple things together to get that kind of power? Let's look at this. Consider the logical not. Let's write logical not as a function. Well, we use lambda to build functions. It takes an argument a. We run it through cond. If a is true, then the answer to not will be false, or nil. So cond: a, nil. Nil's the answer if a is true. If a is not true, we'll go to the next pair. And t is true; t is always true. So the answer is true. So if a is true, the answer is nil. If a is nil, or not true, the answer is true. So it's a logical not operation, written using nothing but lambda, cond, and some values. And is done the same way. If a is true, then and is true,
if b is true as well. Otherwise, and is false. And or is written the same way down here. I'm not going to run through the logic, but that actually works out to be the or function. So we can build up more interesting pieces of logic from the basic pieces that Lisp gives us. How would you use this? Well, this is the function not, right? Lisp says if this appears in the first position of a list that you're trying to evaluate, that is the function to use. So all we need to do to call this function is to construct a list and pass it an argument. Now, it's a very verbose function name, but this is how you call an anonymous function without a name in Lisp. The blue part, the lambda part that's in blue, is the first element of the list. That's the function. You pass it the t, and you apply the lambda to the t, and you get nil. Do the same thing and pass it a nil, you would get true. So we can call these lambda things. But you know what? If I have to go around and write this out every time I want to write a not function, that's going to get really, really tedious. I wonder if there's a way we can attach this anonymous lambda function to a variable somehow and reuse it. Well, it turns out that's not too hard. Consider this expression using and, or, not: function names that we're used to. How can we get our definitions tied to them? Well, we'll embed them in a lambda. The lambda takes the arguments not, and, and or: three separate arguments. And then we make this whole thing, this whole lambda, the first element of a function call that has three arguments. The first argument is going to be the lambda for not. And when you call this function, the lambda for not will be bound to the parameter not in that first lambda. Likewise, arguments two and three expand out to our and and or, and they will be bound to the names and and or. So when we evaluate the gray part, the and-t-or-not-t expression, we will look up and, and we ask if it has a value.
The answer is yes: it has the value of that first-argument lambda, which is the not function. We apply the not function. Or, excuse me, and would be the second function, but you get what I mean. And everything ties together. Now, if you actually had to write a system like this, it would be really, really tedious. And saying something is Turing complete does not mean that it's not tedious to write code in. But if you work in a real Lisp system, what you're really working with is the gray part, and that big lambda and all the other stuff is kind of already there for you. It's as if someone had written all that for you already. And you can think of a Lisp system in this manner: taking tiny building blocks and putting them together in interesting ways. There are three pieces of this that I noticed when I was looking at Lisp. First of all, it's got a small core: just atoms and lists. That's the only thing it works on. It's got simple rules for manipulating that small core: the five functions and the two special forms. Next part: it's got powerful abstractions for building new things out of those simple rules and that small core, with lambda being the main abstraction we use in Lisp. That is why Lisp is such a powerful language. The basis is trivial; it's simple; anybody can do it. I wrote a Lisp interpreter, that page 13. I wrote that in Ruby the other week, in about an hour, and I had a Lisp interpreter. Didn't do much. It's on my blog if you want to go check out a Ruby version of that page 13. That was kind of fun. I always wanted to write a Lisp interpreter. So: powerful abstractions. That's what takes simplicity and makes it powerful, adhering to these three principles. After college, I went to work on the RCA missile test project down at Cape Canaveral. I was working down there during the first few years of the shuttle launches.
And we didn't do software on the shuttle, but we did software for the radar ranges that would track the shuttle and track the various launches down there. I was going through my files the other day and I found this. This is the Cordial configuration display and logging system, detailed design specification: a network communication protocol. This is a design spec I wrote while I was down there in my very first job out of college. It was humbling to read this. I had just taken a course on Ethernet, right? And I learned about the seven network protocol layers. So, of course, our system had to have all seven layers of protocol in it. And this is the state of networking back then: the physical layer, the data link layer, and the network layer were provided by a vendor. Everything above that, we had to write ourselves. TCP/IP had been invented, but it really hadn't broken out of the Unix world very much, and we didn't have any Unix machines at the time. We didn't even know about TCP/IP in particular. So we were working in that world. The Cordial system was designed to take information and display it to really important people that had absolutely nothing to do with the launch. Like generals. Not the people who actually ran the launch, like the launch controllers or the radar controllers or the people in the control room; they didn't need this stuff. This was for the generals sitting back who wanted to see what was going on during the launch at any time. So we gave them status information. Okay, this is a snippet out of the spec. I just want to show you guys how good I was at writing specs, even that early in my career. I can't believe this. This is an algorithm that we would use at some level in the protocols we were designing. I have no idea if this works or not. It has no unit tests. We didn't even write it in real code; it was pseudocode. Why did I do that? Because I didn't know any better back then. I can't believe that. Let's skip that.
The Cordial project was plagued by a particular problem that many systems have. That's right: this was a non-critical system. We didn't have a budget to do anything, but it was important, because it was for the generals. So we got all the hardware equipment that nobody else wanted. I found this image on, I think it was an ancient-computer museum website or something like that. I was paging through it and I said, that's it. That's the computer we had to work with. It was an 8080 microprocessor. It had five and a quarter inch floppies. Who here has used five and a quarter inch floppies? Okay, good, good. It had assembler. I think it had BASIC too, but I wasn't a real fan of BASIC at the time. And it had, and this is critical, this is the thing to remember: it had a memory-mapped color graphics system. And actually, while I was reading about this system on that ancient-computer museum website, it had this little ditty that I had totally forgotten about, but it's absolutely correct. When you deleted a file, it repacked all the remaining files back to the front of the disk, and it used the 8K of screen RAM as a buffer to do it. That's absolutely right. When we would delete a file, the screen would go crazy with colors as it moved stuff, using the screen RAM as a buffer. I'm really thankful that site's there to bring back these wonderful memories. This is one of the computers that we would have to display status information on. The other system that we used was a PDP-11, and it had a graphics terminal that was really fancy, but you communicated with it through an RS-232 port, and you sent it very specialized graphics commands. So we had two different kinds of display systems with absolutely, totally nothing in common. One was memory-mapped; one was kind of a command-based vector system. Different architectures: a PDP-11 versus an 8080 microprocessor. No language in common. This one had Fortran and assembler.
The other one had assembler, but they were different kinds of assembler. So what were we going to do? How in the world could we build a system that would display the same graphics on two machines as different as these? This was the answer. How many people can read this? No one? Excellent, you're in for a treat. I hear you groaning. Actually, yeah, it's a bizarre language, but you know what? It really worked for this purpose. Let me tell you a little bit about Forth, if you've never heard of Forth. It's a very simple language. Here's a string of Forth code: 6 1 +. The first thing you notice is: what is the plus doing at the end instead of in the middle? That's because Forth is reverse Polish notation, just like HP calculators, which I had one of at the time, so I thought this was really cool. We have a next pointer that points to the next instruction to be executed, and so we execute that instruction, which is a six. Six pushes the value six onto the stack. Numbers push themselves on the stack, so one pushes itself on the stack. Then plus takes the two things on the stack, adds them together, and pushes back seven. So it's a stack-based language, pushing things on and off of the stack as you execute. It's just a trivial language to implement. We'll talk about this. 6 1 +. Suppose I want to take this one-plus and refactor it. Forth was the language where I learned about factoring and refactoring. Truthfully, it was a great language for that. We want to refactor this into a function. The Forth term for that is a word. We want to create a word that implements the one-plus operation. We'll call it add-one. To define a word in Forth, you use a colon, which introduces a word, and a semicolon, which terminates the word. The first thing after the colon is add-one, and then you just put the code you want to execute in the middle. No parameters, nothing special: very lightweight, Forth definitions.
Forth definitions longer than about three lines were considered really, really long. Most definitions were probably about a line long. So you built up your program in little itty-bitty pieces. So how was this implemented? Well, the green part created an entry in our dictionary. Forth was composed of words. Words live in the dictionary, and they are arranged in vocabularies. The dictionary entry started with a link to the next entry in the dictionary. So add-one had a link to whatever word came after it, whatever word was defined immediately before it, and then it had the string add-one in it. After that was what was called a code word, and that code word always pointed to machine instructions. We'll get back to the code word here in a second. After that was the address of the code word of the word that implemented the number one. One was actually a function in Forth. Interesting. And the code word of one points to machine code as well, and that's critical. After that we have the address of plus, which points to the code word of the word that defines plus. And after that we have the address of semicolon, which points to the machine code of the word that implements the semicolon action, which is: return from this function. So here we have: push one on the stack; add the top two elements on the stack, pushing the result back; and return from this function. Encoded as Forth instructions. Now, to execute this, all we needed was an instruction pointer that pointed to the next place in our Forth code to execute. Take that location, look up the address of the code word, and then jump to wherever the code word pointed. So it was a look-up, look-up, and jump. That was all you needed to write a Forth interpreter. This is called threaded code, a threaded-code implementation, and it's very, very small, very, very concise.
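The dictionary-and-threading idea can be sketched in Ruby: primitive words as lambdas, and a colon definition as nothing more than a list of other words to run. The class and its names are my own illustration, without the machine-level indirection of real threaded code:

```ruby
# A toy Forth: a shared data stack, a dictionary of words, and an outer
# interpreter that either executes a known word or pushes a number.
class ToyForth
  def initialize
    @stack = []
    @words = {
      # Primitive words are lambdas that manipulate the stack directly.
      '+'    => -> { b = @stack.pop; a = @stack.pop; @stack.push(a + b) },
      'drop' => -> { @stack.pop },
    }
  end

  # ": name body ;" in Forth: a new word defined in terms of existing words.
  def define(name, body)
    @words[name] = -> { run(body) }
  end

  def run(source)
    source.split.each do |token|
      word = @words[token]
      word ? word.call : @stack.push(Integer(token))  # numbers push themselves
    end
    @stack
  end
end

forth = ToyForth.new
forth.define('add-one', '1 +')  # : add-one 1 + ;
forth.run('6 add-one')          # => [7]
```

To a caller, add-one is indistinguishable from a primitive like +, which matches the point made below about primitives and system words looking identical to the programmer.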
The interpreter on a PDP-11, which is what we were using for our graphics terminal, looked like this. Two instructions. So I was using an interpreted language that could add two numbers in three machine instructions, essentially: two instructions for the interpreter, one instruction to add the two numbers together. Now, the 8080 implementation was a little more complex, because it was an 8-bit machine and we had to add the two halves of the 16-bit word separately and handle all that. But it was still pretty simple for an 8-bit microprocessor. Here is the implementation of the plus word. Now, it starts off with HEAD; I forget what the numbers meant. This is actually what was in the assembler itself. This is a macro: the green part expanded to the link and the string plus in the code base. The PLUS thing, I think, is what created the yellow code word there. And then the red implementation was: add the top of the stack to the next thing on the stack, leave it there, and then NEXT, the jump back into the interpreter. So it was trivial to build up these tiny little definitions in assembler. Okay, so what did a Forth system look like? A Forth system had a handful of primitive words, like plus, minus, fetch from this address, store at this address, print this number out. Very simple words, defined at a primitive level. Built upon those primitive words, it had higher-level system words like if, while, then, emit, print a number: just basic operational things. Now, the key thing is, to a programmer, the primitive words and the higher-level system words looked identical. As a programmer, I couldn't tell whether I was calling a primitive or not. It made no difference to me as a programmer. Above the system words we had a whole slew of vocabulary words. We had an interactive compiler, we had virtual storage, we had a screen editor, we had an inline assembler. And this was all running on a machine that has less memory than your wristwatch.
Actually, my very first computer was a single-board computer. It had 4K of memory, and I was running a Forth system in that 4K of memory. So: very, very tight, very, very concise, perfect for the machines of that day. The thing to remember is that those primitive words are machine specific. If I wanted to port Forth from one machine to another, all I had to do was rewrite the primitives, and everything else was system independent. It didn't matter; it was written in terms of the smaller primitive words. And in fact, when a new machine came out, it was often Forth that was the first language available on those new machines. I think IBM even had some machines that used Forth as a boot-up language. It was in ROM, so you start the machine up and there's Forth, and you use Forth to load the rest of the operating system. A lot of the Nintendo console games were written in Forth back then. So how does this solve our graphics problem? Well, it's very simple: we added high-level, user-defined graphics words, and we wrote a graphics driver that was specific to the kind of display, either the memory-mapped driver or the terminal-based driver. We wrote those as primitives down at the bottom level, we wrote the higher-level things above them, and it actually worked out great. We implemented the whole thing, as far as graphics drivers go, on both machines, and we were displaying the same thing. Now, unfortunately, I didn't stick around with the Cordial project to the very bitter end, so I don't know if it was an ultimate success or not, but this piece of it actually worked out very, very nicely. So again: a small core of small, simple words; simple rules, just a two-instruction interpreter; and a powerful way of building new words upon the lower words. Actually, it was a lot of fun to work with. If you've been paying attention lately, there's been some revival of interest in Forth, and I think the language is... is it Refactor? Factor.
Factor is essentially Forth written on top of some really fancy graphics primitives. It does really fancy things, but it's Forth underneath, so it's a cool thing to check out. After leaving RCA down at Cape Canaveral, I came back to the Midwest, wanting to get close to family. I worked for the General Electric Aircraft Engines company in Cincinnati, Ohio. We made jet engines. I love this jet engine. This is cool. Have you ever seen this? This is called a UDF, an unducted fan engine. If you've ever looked at jet engines, the military engines are long and skinny; commercial engines are big and fat. The reason is that the military engines want power, so you make them long and skinny to get that power. Commercial engines want fuel efficiency, and if you flow a lot of air around the jet engine, it increases fuel efficiency, except it takes a lot of weight to make that cowling, and the fans that fit around it, to make the air flow around the turbine portion of the engine. So NASA had a great idea, and they collaborated with General Electric to build an unducted fan, which had rotating blades on the outside of the engine that would pull the air around the engine, keeping the center core a little slimmer than was otherwise needed, reducing weight and increasing fuel efficiency at the same time. Really cool engine. Never caught on. I think it had something to do with those spinning blades on the outside. They actually conducted fuselage penetration tests. Ever see on Mythbusters how they have those air cannons that launch things? That's what they did, and the blades, spinning at full speed, would easily pierce entirely through the fuselage. So that might have something to do with why it wasn't adopted. But when I started working there, the big thing in jet engine design was digital controls.
Back in the old days, jet engines were controlled by a series of cams and levers. When you said give it more fuel, mechanical things engaged inside the engine: they would lift levers, they would increase the fuel flow. And they would carve these cams out and design these cams so they would get just the right amount of fuel with just the right amount of airflow to get the right kind of power requirements. It was tedious. It took incredibly long to get those cams designed right, because you'd have to build an engine, try it out, find the cams were wrong, take the cams back to the engineering department, re-grind them, and bring them back. The digital engine controls would keep those schedules in a computer, and the computer would monitor the airflow, the pressure, and the temperature, and when you said give me more power, the computer would say: okay, this is what I have to do at this set of temperatures and pressures and whatever else it was monitoring. It was trivial to change the schedule in the computer. A guy sat at a terminal connected by an RS-232 port to that thing and typed in the new fuel schedule while the engine was running. I was sitting in a test cell once when he missed a comma in the fuel schedule, and the engine... the guy sitting at the controls just shut it down really quick. They were not allowed to change the fuel schedule during an engine run after that. But it was still easy to change; a little too easy, as that point proves. So we would get digital engine data from the FADEC, the full-authority digital engine control. The software I wrote would talk to the FADEC and pull that data down, and we would convert that data to analog signals. Isn't this typical of a legacy system? We have beautiful, precise digital data, and we convert it to analog so you can plot it on charts. So we would take the digital data and run it to what were called digital-to-analog converters, DACs. And DACs were essentially simple.
You write a value to the DAC, between one and a hundred, and it would set a voltage on its output that the grapher would sense. Very trivial to write. So we had a DAC software driver that would drive all the DACs in the system. We had a table that was updated with the new digital data that would come in, and we had data update software that would update this table. I want to point out that these two pieces of software were in separate threads, and that the data table in between them is what's called shared data. Now, what happens when you have two threads reading and writing, and at least one of them modifying, shared data? You have the potential for race conditions, as someone is saying. This was an exciting project; this was the first time I had worked with threads, and I'd gone to a class on it. I knew about all the things involved with it. I knew what to do. We set up locks around these two accesses to prevent them from overriding each other, and this worked great. Except we had to update those DACs every 20 milliseconds. That's 50 times a second, which back in that day was pretty darn fast. In fact, our software was just barely fast enough, and we couldn't ask it to do anything more without it running out of time to do all this updating. We did a lot of analysis and profiling, and we found out that these system locks were causing a lot of slowdown. So, idea: the system locks are too slow. Let's not use them. Now, I know you're supposed to be very careful about accessing shared data, and we knew this. So we sat down and we analyzed the situation. We went down to the machine instructions. We timed the machine instructions. We knew how long they would run, and if we did it in this particular fashion, we calculated that it was a very, very reliable way to update our software. In fact, there was one chance in a million of it failing. One chance in a million. 
You'd ride an airplane that had one chance in a million of failing, right? I mean, it's way better than that in real life. One chance in a million seems minuscule. So we entered the test phase. We put the software in the test cell and we started using it and trying it out. Not on real engines yet; we were just testing it. We found that the system failed about once a day. Sometimes twice a day, but not more than that. Just about once or twice a day. Let's do some math. Okay, 50 times a second, by 60 seconds in a minute, by 60 minutes in an hour, by eight hours in a working day, is about a million and a half. Once or twice a day. Doggone it. Fortunately we found this out in testing. We went back and we revised the way we were doing it. We came up with a way of updating the data tables without using system locks that was 100% reliable, and it was just... I don't even remember how we did it now, but we came up with some trick and it actually worked out very nicely. But this is the point. Threaded programs are hard, and they are hard in a way that people who don't write threaded programs on a daily basis fail to comprehend. The reason they're hard is that shared data, the updating of it, and the possibility of race conditions that happen once in a million times. Your unit tests are not going to show once-in-a-million failures. They'll only show up at the worst possible time, or in our case, under testing. So shared data: bad. Too many variables. So, you want to write concurrent programs, and we want to write concurrent programs now because hardware concurrency is the big thing. It's what's getting us that extra boost of power now. Just throw more cores in the system and take advantage of it. But I was writing a threaded program on a single processor. Think about the race conditions that are inherent when you have two processors running at the same time on the same memory. How do you deal with that? 
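That shared-data bug is easy to reproduce in any language with threads. Here's a rough Ruby sketch (hypothetical, obviously not the original code): two threads doing a read-modify-write on a shared table, with a Mutex playing the role of those system locks. Remove the synchronize and the read and the write can interleave across threads, silently losing updates.

```ruby
# Hypothetical sketch: a shared "data table" updated by two threads.
# The read-modify-write below is not atomic; without the lock the two
# threads can interleave and lose updates. The Mutex serializes access,
# like the system locks we started with (and which turned out too slow).

table = { value: 0 }
lock  = Mutex.new

threads = 2.times.map do
  Thread.new do
    50_000.times do
      lock.synchronize do
        old = table[:value]        # read
        table[:value] = old + 1    # modify and write back
      end
    end
  end
end
threads.each(&:join)

puts table[:value]  # with the lock, this is always 100000
```

The lock makes it correct; the cost is exactly the serialization overhead that was eating our 20-millisecond budget.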
How can you write reliable threaded multi-processing programs and take advantage of that concurrency? How many people have played with Erlang? Anybody in here? Okay, I've only dabbled in it, so this is going to be a very high-level look at Erlang. It's very, very interesting what they've done with that. It has four types, essentially. Atoms, which are lowercase identifiers. Variables, which begin with an uppercase letter. Tuples, which are in curly braces, and lists, which are in square brackets. The difference between a list and a tuple is that tuples are always the same size. If I have a two-tuple {a, b}, you cannot add anything to it; it's always going to be two elements. Lists can grow, like in Lisp. You can cons onto a list in Lisp, and you can do the same thing in Erlang with its square-bracketed lists. Okay, it has this operation that looks an awful lot like an assignment. So if you see this in an Erlang program, it assigns the tuple {jim, weirich} to the variable Name, and that actually works. However, if you do this, assign {john, doe} to Name, it fails because Name already has a variable... excuse me, Name already has a value, and you cannot reassign variables in Erlang. Variables don't vary. I wonder if anybody told the Erlang guys that variables change value. It's a little naming thing here that I'm having a problem with. That's just the way Erlang works. Okay, so you cannot change the value of a variable once it's been assigned. And the thing that you're doing right there is not really an assignment at all; it's really a pattern match. You match the thing on the right-hand side against the thing on the left-hand side, and if it's an interesting thing like a tuple here, the atoms have to match exactly, and any value that matches against a variable will be stored into that variable, assuming the variable is not already assigned. If the variable already has a value, it has to match that value. So it's a complex pattern-matching operation. It's not assignment at all. 
It's pattern matching, even though it looks like assignment. So this last operation: since Name contains {jim, weirich}, and the jim matches the jim, the weirich will be assigned to the variable Last. So it's a way of pulling elements out of a tuple. If there were two variables in that left-hand tuple, you'd get both elements of the right-hand tuple assigned to the individual variables. So it's a way of breaking things out of tuples. Cool. And you do the same thing with lists. You have a list with this vertical bar in it, and it will match the head and the tail of the list on the right side. So this thing will pull out the a and stick it in H, which is the head, and pull out the [b, c] as a list and stick that in T, for tail. So it's kind of like car and cdr combined into one operation. You can pull lists apart, you can put them together, just like you can in Lisp. Now, this is a function definition in Erlang, and even though it looks like it's three function definitions, it's really only one function definition with three clauses. Let's see how this works. Suppose you call member and you give it the atom b and the list [a, b, c]. Erlang will go through the list of member clauses here and say, does this match the first one? Well, the first clause takes an item, which could be anything because it's a variable, and an empty list. The second argument is not an empty list, so that won't match. The second clause doesn't match either, because the head of the list has to be the same as the first argument, and that doesn't match. So we go down to the third one, which says the first argument can be anything and the last argument has to be a non-empty list that has a tail. So the tail goes into T, we throw away the head of the list because we're not interested in it, and then we recurse and call member with the item and the tail of the list that got passed in. 
So now we're calling member again, a second time, but this time with a different argument list. We go through, and the first clause doesn't match because [b, c] is not empty. The second one matches, because the head of the list now matches the first argument of the member function. So we're going to trigger the second clause and return true. So the value of asking the question, is b a member of the list [a, b, c], is true. Bizarre. No loops. No assignment statements. No if-then-elses. It's all done with pattern matching and recursion. Well, the big thing about Erlang is creating processes. So let's see how that works in Erlang. Let's say we have a client and a server, and the client is going to form up a message and send it to the server, and it does it like this. Here's the double function. It takes two arguments: a PID, which is a process ID. We identify any process in Erlang by its PID, and that gets passed in; it's going to be the PID of the process we want to send the message to. And we pass in a value N, which is going to be a number. And then we use the bang operator here, PID bang. PID bang sends the tuple. Self is my own PID; it's the PID of the currently running process. So we construct a tuple that has my PID, self, as the first element. The second element of the tuple is another nested tuple that contains the atom double and the number that we passed in as an argument. And that tuple gets sent to the server PID. So PID comma double comma three, in a nested tuple, gets sent to the server. And then we sit and we wait. We wait for an answer to come back. We want to receive a PID and then a tuple with the atom answer and the result. And the result of this double function is going to be the result that gets sent back to us. But we've got to write the server side too. And it looks similar but different, okay. The server sits in a loop, and we receive; we sit there and we wait for a message to come in. 
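As an aside, that member walkthrough translates pretty directly into Ruby if you hand-roll the pattern matching (a hypothetical sketch; Ruby has no clause-level patterns, so the head/tail destructuring is done explicitly):

```ruby
# Hypothetical Ruby translation of the Erlang member/2 function
# described above: no loops, just head/tail destructuring and recursion.
def member?(item, list)
  return false if list.empty?   # member(Item, [])         -> false
  head, *tail = list            # the [H | T] split
  return true if head == item   # member(Item, [Item | _]) -> true
  member?(item, tail)           # member(Item, [_ | T])    -> member(Item, T)
end

puts member?(:b, [:a, :b, :c])  # => true
puts member?(:d, [:a, :b, :c])  # => false
```

Each early return plays the role of one Erlang clause, tried in order, exactly like the trace above.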
And the message has to have From; that's the from PID, and since it's a variable, the sender's PID will go into it. It has to have the atom double in the second tuple, and it has to have a number. And if it matches that, we send back to the PID identified in From. We send back to it the answer: a tuple that has our PID, the server PID now, the atom answer, and two times N. So we double N and we send the message back to the client. We send in the three, we double it, and we send back an answer of six. And then we call loop and we recurse. You may have a battery light; does that mean something? Yeah, that's probably it. Yeah, it's got a flashing battery. I can't stand still and talk. This will give me enough tether. I'm fine. I'll just stay on this side of it. I won't walk in front of the screen anymore. Plus there's feedback right there anyways. Okay, there we go. We're about done, because I'm getting tired. You guys getting tired? You know, I really want to thank you for coming back after a long day and a quick lunch and listening to me. I really, really appreciate that. I just want to tell you guys that. I can go longer. I can go longer. Mike's been waiting a year for this. He's going to get every single minute he can out of it. Mike, everybody will be gone and I'll still be talking to you. So we can see in Erlang, a little strange syntax thing going on there, but it's actually pretty simple to set up a client and a server that we can send messages to and receive messages back from, and it happens asynchronously. And in Erlang we don't care whether the server is on the same machine, on the same virtual machine, or anywhere we can get to: on the same box, on the same network. This works across boxes. And when we call double we don't care where it sits. We don't care where the server is. We just want that number doubled and sent back to us. 
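For Rubyists, here's a rough analogue of that doubling client/server (a hypothetical sketch: Ruby threads with Queue objects standing in for Erlang mailboxes; unlike Erlang this only works inside one OS process, but the message shapes mirror the tuples above):

```ruby
# Hypothetical Ruby analogue of the Erlang doubler. Each "process" is a
# thread with a Queue as its mailbox. Messages carry the sender's mailbox
# so the server knows where to reply, like self() in the Erlang version.

server_mailbox = Queue.new

server = Thread.new do
  loop do
    from, tag, n = server_mailbox.pop   # receive {From, {double, N}}
    break if tag == :stop
    from << [:answer, 2 * n]            # From ! {self(), {answer, 2*N}}
  end
end

def double(server_mailbox, n)
  reply_box = Queue.new                 # our own mailbox
  server_mailbox << [reply_box, :double, n]
  _tag, result = reply_box.pop          # wait for {answer, Result}
  result
end

puts double(server_mailbox, 3)  # => 6

server_mailbox << [nil, :stop, nil]     # shut the server down
server.join
```

Notice there's no shared mutable table here: the only communication is messages flowing through the queues, which is the discipline Erlang enforces for you.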
There are all kinds of patterns available in Erlang to handle things like this, and we're just barely scratching the surface. But simple message-passing primitives, simple recursion, simple pattern matching. Now, processes in Erlang are fast. These are some numbers that came out of the Programming Erlang book, if I remember correctly. Three microseconds of CPU time, nine microseconds of wall time. That's a per-process spin-up time, spinning up thousands and thousands and thousands of processes in an Erlang program. So processes are cheap. Erlang's not object-oriented; in Erlang the processes are the building blocks, rather than objects. And just like in Ruby we send messages to objects, in Erlang we send messages to processes. I think there's some parallelism in there that I don't quite grok yet, because I haven't used Erlang enough. But it's very, very interesting. Now, remember what was bad about my example of a threaded program? What was the killer feature? Shared modifiable data. Now, in Erlang you have no modifiable data. I cannot reset a variable once it's been set. There is no way to modify data. There's no way to share data that can be modified between processes. No external locking required. It all happens automatically through the primitive send and receive operators built into Erlang. So Erlang totally avoids 99 percent of the problems involved in writing concurrent programs. Well, maybe 90 percent, because you've still got to design the thing to work on processes. But it eliminates a lot of the problems. Again we see a small core: atoms, tuples, lists, and processes. We see simple rules: no shared data. And we see powerful abstractions; in this case it's messaging and pattern matching that we're using for powerful abstractions. Why is this important? Because of code like this. Okay, quickly. 
How many languages or different notations can you identify in this chunk of code? This actual real-life enterprise chunk of code. I'll give you a second. I heard someone say four. Who says four? More? Who says more? What number? No, no, more than four. Anyone else? Okay, at least five. Anybody higher than five? Seven or eight? Okay. Anybody higher than seven or eight? Although it depends on how you count languages or notations, right? So it's not going to be exact. But this is how I counted it. We see the div and the slash-div and the br and the a tag. Okay, so that's HTML. We identified that. The yellow stuff, the display equals, and a little later we see a none: that's CSS. The purplish-bluish stuff: that's a tag library, a Java library that allows you to evaluate expressions nested inside of all this stuff. There's a lot of that sitting there. And I had someone say, but that's just XML, which is the same as HTML. I said yes, but it's improperly nested inside the other HTML, so it doesn't count as the same. It's a different notation. I mean, you cannot put that c:if right there in regular HTML. It's a different language, totally nested. In fact, this is bizarre. Start at the top. We have a single quote. Then we have a double quote. Go down to the next line. We've got another single quote, but it's not a matching single quote; it's a different single quote. And follow down. Now we've got another double quote that doesn't match the original double quote we first had. Now we've got another double quote that does match. Now we've got a single quote that closes the first single quote. And then we have a double quote that matches the currently open double quote. And then we have a single quote that closes the original single quote. My gosh, I have no idea how to parse this. I could not write a parser for this. There's also Java down here at the bottom. 
We call request.getContextPath; that's Java. And the percent angle brackets around it, that's not really a language, but it's part of the JSP notation, so I'm counting that as separate. And of course we've got JavaScript embedded in there somewhere too. So yeah, I counted seven languages. It depends kind of on how you count them. This is a mess. I'm not kidding. This is a real page from a client I worked at, sanitized, okay, so you can't tell the domain anymore; I'm not going to put blame on them. But this is common in the Java world. And I've been working on a Rails project; it's not that far off of a Rails project either, sometimes. So, you know, complexity. Complexity. We are plagued with complexity in all the software we write. I'm not going to bother to talk about C++ or Java generics. Just imagine for yourself what they're like. So how did we come to this point? How did we get to the point where this complexity exists in the systems that we're writing? I think there are two basic reasons. Okay, we as programmers, like the quote from Frederick Brooks, we love to build our castles in the sky. Our castles built on air. And we love it. This is why we're in this job, to do this kind of stuff. So if it's complicated, that's fun for us, right? This one took me a while to get. I think most people have got it now: he's so intent on solving his problem that he doesn't look around and see there are other solutions available. I think we get that way sometimes. Which leads us back to William of Ockham. And he said... I don't know what he said. I don't read Latin. But in English he says, entities should not be multiplied beyond necessity. Or, if we put this in programmer terms: the simpler solution is the correct solution, in just about every case. So I love this quote. This is Tony Hoare. He was actually in computer science before I was. He's the guy who invented the quicksort, or discovered the quicksort. 
And he worked on parsers and compilers; actually, in the very early days, he built one of the very first Algol compilers. And he said this: there are two ways of constructing a software design. One way is to make it so simple that there are obviously no deficiencies. And the other way is to make it so complicated that there are no obvious deficiencies. The first is much, much harder than the second. So why is simplicity hard? We live in a society that glamorizes complexity. I was reading a rant the other day; it was one of your typical Java-versus-Rails-type rants. And he says, Ruby will never become mainstream because I have written a hundred-thousand-line Java program, and Ruby could never handle that. Well, first of all, you would never need a hundred thousand lines of Ruby to do the same thing. But complexity is what they're thriving on. They love complexity. And we live in a society that counts lines of code as a productivity measure. How crazy is that? So when I go in and refactor the thing down, so it's half the number of lines of code, I might be negatively productive. That don't make sense. Okay, don't get smug. So we live in a society that values complexity. The second issue is that we as programmers love to abstract our solutions. Somehow I don't think the world needs a food-slash-data-slash-word processor. And yet we'll write a generic parser that will parse everything in the world, or we'll write a script that will handle Java. You've got examples of your own that you've come up with, where you've taken something that solves a problem you have right now. But you know if you just add this one little feature it can solve other problems that you don't have, and you add it anyways. I think there are some promising trends that I've seen here at this conference, and most of them were mentioned here, actually. Camping: a simple solution. The thinking is, if you don't need the complexity of Rails, don't use it. Camping is a great idea. 
We talked about Merb and how it solves problems that Rails doesn't necessarily address as well; it solves them better. Now, it doesn't solve the problems that Rails solves better. So there are trade-offs; pick the kind of solution that's going to work. And Rubinius. I am really excited about Rubinius. I haven't had a chance to dive in and use it yet, but the philosophy behind it, the idea that everything is Ruby, keep it simple all the way down, and consistent all the way down. Remember: small core, simple rules, powerful abstractions. And I see that in Rubinius all the time. So my message for you, the most important thing I have to say to you tonight, is this. Simplify what you're doing in your code. Don't get so head-down in solving this problem that you don't see the simpler solutions around you. And the keys to simplifying are to identify that small core of functionality that really solves your problem, develop the simple rules for that small core, and use powerful abstractions to get there. This is really the end. The only reason I wanted to give this talk is so I could use the cartoons from the book Thinking Forth by Leo Brodie, which is now in Creative Commons. So you can actually go out and grab this book and read it. The cartoons are great. It's about Forth, but it's really about thinking outside the box, and it stresses simplicity. So, you know, it's not a bad book to pick up and read for those kinds of ideas. Thank you very much. My message was so simple, there are no questions. I know, mine too. And I'm going to say it again: I really thank you guys for being out here so late. This is brain-draining on me, and I have the adrenaline going on my behalf. So I thank you for staying awake and listening to everything. Do you have a question up there? I don't have a question, but a comment about the variables in Erlang. If you think about them as mathematical variables in an equation, they don't change values there. 
So if you wanted to change how you present that in your talk, to get the point across to other developers, you could put it that way. You know, that's a good point, because when I first started working in Fortran, way, way, way back before you were born, saying x equals x plus 1 just blew my mind. Yes. It's fair to say that the variable does change once, from an undefined state to whatever its value is. [Inaudible exchange about the microphone.] One more comment on your example: the CSS there, was it the single quotes in the CSS? So the whole style went into your code wrong. Oh, right. Oh, yeah. The display thing, that was just inline. You're right. I did generalize it a bit, so it may have been a typo, but we're not going to blame them. It might be my fault, but yes, this is real. Pat. Just a quick comment. It's probably fair to say that Erlang variables are constant, but we have to realize that in Ruby, constants aren't. Ruby constants aren't. That's his point. Yes, that's very true. Anything else? Yes. In terms of simplification, where do you lean on something like metaprogramming to make things drier, versus the magic that metaprogramming creates? I personally love metaprogramming, and I will confess I love Lisp. I love Lisp. What was my second language? Okay, after Fortran. I love it, but in terms of simplicity, you know, I will write metaprogramming when we play with it, but in a real program, when I want to keep it simple, I ask: does this solve a problem I'm having? And if metaprogramming solves a problem and makes the entire solution simpler, then it is the right answer. If you're doing a whole bunch of metaprogramming to remove one line of code, I usually say, Joe, that's not a good idea. Truthfully, I do that too. 
So I'm not just picking on Joe, but you've got to pair program with this guy sometime. It's fun. Time for one more. One more comment. Yes. Are you coming to the address? You know what, I think I'm going to grab some food and then I'm going to come up to the address. I'm going to scroll by all the code.