How's everyone doing? I figured you were all partying late last night, so I thought I'd make this easy. I reduced my slides down. I have four slides. Static types: we can't use them well, we don't usually need them, and they can harm more than they help. Thank you. I didn't think you would fall for that. So, by way of illustration, let's make a delicious and quick American snack. Take the bread out of the bread box. Don't do this. Supposedly this was because I don't have a good battery, but I believe I do. Okay, we'll do this again. Take the bread out of the bread box. Oh, take the peanut butter and the jelly out of the refrigerator. All right, I'm not going to be tied to this podium. Let's see. Spread the peanut butter on one slice of the toast. Spread the jam on the other slice. Put the two together, and you have a peanut butter and jelly sandwich. Now, this is a recipe and a set of directions that almost any five-year-old could follow. But since we're talking about static types, LTR is what's coming for you. LTR, "left to the reader," is what smart-ass math and computer science professors put at the bottom of really hard problems and even harder proofs. So, if someone wants to come up and give me a purely functional peanut butter and jelly sandwich implementation, with extra credit for static types, we can do that. So, you might be thinking, what does making a peanut butter and jelly sandwich have to do with programming? I don't know when my computer changed to Spanish, but my talk has a hundred percent more meme slides. Oh, that's not working either. All right. So, what we looked at is very underspecified, right? I didn't say where to put the bread. I didn't say where to put the peanut butter and jelly. What if we had put the bread on the floor? This is not typically something that recipes ever have to address, but it is underspecified when you think about what we have to tell a computer to do. It's also imperative: I told you each step.
You can't rearrange the steps, like putting the pieces of bread together first and then putting the peanut butter and jelly on them. It's also repetitive. We didn't say, let A equal bread, take A from there, put this on A, right? That's not working either. It's also not abstract. I know someone in the audience is thinking, what if I wanted not just a peanut butter and jelly sandwich, but ham and Swiss, right? So we need to abstract this. Okay. So, this is double advancing, which is not going to work. I think it was because the computer had two brains there for a second. Okay. So, actually, not really joking here, right? We have this imperative thing. We hear a lot of stuff about how Ruby is imperative, and there's this whole other domain over here where we have functional languages, but could someone actually offer me a way to describe making a peanut butter and jelly sandwich in a functional manner? It's a difficult thing to even think about. So what does that even mean? Let's talk about this thing called programming for a moment. In the industry of programming, we tend to make poorly designed programming languages that make unreasonably broad assertions about their suitability. We say that they're general purpose, and then we use them, and it requires tremendous effort and cost to change them, sometimes just to change from one version of a single program to another. Then we build things and we fail a lot. I was doing some research: in one study, 70% of projects reported had had a failure in the previous 12 months. 70%. In another study, 17% of companies said that they had had a failure that could actually threaten the livelihood of the company itself. And it's not just because we're not using the right processes. There was another study, this one actually in Dr. Dobb's Journal: 73% of teams using agile reported project success, compared to about 67% using more traditional methods, which is probably something closer to waterfall.
So even this, you know, agile process is not giving us significantly more success over something that we hear at almost every conference, how waterfall is not the way to be doing software. This is where it gets really interesting. We typically don't learn from failure, and this is not just programmers. There was a study of doctors who do coronary artery bypass surgery, right? And what they found was that doctors whose patients died got worse over time. They didn't get better. They also found that doctors who watched other doctors performing these operations successfully did not get better over time. There was one group of doctors who got better over time. They were the doctors who watched other physicians perform the operation and have some failure, some problem. What the researchers learned from this, they even have a name for it. It's called attribution theory. What they learned from this is that we tend to view ourselves, things that happen and things that we do in a light that is most favorable to ourselves. So for the doctors whose patients died, it was because the patient was unstable. They were rushed. The handoff to ICU didn't go well. The doctors blamed things outside of them, so they had no reason to improve. The doctors who watched other people fail attributed that failure to the doctors, not to the external events. And so they said, hmm, I don't want to make that failure. I better learn something from this. That's really interesting in a contemplative way. We learn from others' failures, not necessarily our own. Now there's a way that we can actually combat that. If we do not allow that part of ourselves that gets defensive when someone suggests that we're not doing something right, if we can step back from that ego for a moment and listen and look at what someone is saying we're not doing well, we can actually learn from it. So experts in many fields have that ability. They've developed the ability to step back, evaluate the feedback. 
Maybe it's good, maybe it's not. I think that programming is actually a behavioral science. I know we hear a lot about math, but what we're talking about is people communicating, trying either to tell a computer to do something or to communicate with someone else about how they're telling the computer to do something. And that is not something that math is just perfect for. So if it's a behavioral science, then we have things to think about like perception. How do the things that we see affect what we're actually thinking? And interestingly, do we even see everything that we think we do? There's a very interesting study that was performed back in the 1970s. It's a grainy video of two teams, three people on each team. One team is wearing white t-shirts and one team is wearing black t-shirts, and each team is passing a basketball between its members: the black shirts are passing to other black-shirted members, and the white shirts are passing to other members of the white team. Now, here's what's really interesting. Halfway through the video, a woman carrying a red umbrella walks through the middle of the activity, stops for a moment, and then continues walking offstage. When this study was originally conducted, almost eight out of ten people did not see the woman. They were told to focus on a task, counting how many times the basketball was passed by the white team, so they were focusing on the white team members, and eight out of ten people didn't see her. Only two out of ten saw the woman walk through the middle of the video. If that doesn't give you pause to think about the things we look at every day and may not see at all, even though they're right in front of us, I don't know what will. But when we're programming, we're not watching videos, right? So what's that got to do with programming? We are thinking when we're writing programs, I hope. So let's do a little test.
Say someone has given you a slide, or rather, say someone has a deck of cards, and they've laid out four cards and told you that there is a single rule about this deck: there are numbers on one side and colors on the other. Single-digit numbers, even and odd, and on the other side either brown or red. How many cards do you need to flip over to prove or disprove the rule that, in this deck, even numbers have a red back? So: even numbers have a red back. How many people think you need to just turn over the three? The red card? The eight? The brown? Okay. So however many people here, like four people, raised their hand; how many people have no idea? This is a simple logic problem. If the number is even, then the back is red. Is that equivalent to: if the back is red, then the number is even? How about: if it's not even, then it's not red? How about: if it's not red, then it's not even? To settle this, we actually have to turn over two cards. If we turn over the eight and see red, we've only confirmed the rule; if we see brown, we've disproved it. And brown is not red, so if we turn over the brown card and see an odd number, the rule survives; if we see an even number, we've disproved it. When this study was conducted, 90% of people got it wrong. Now be honest, raise your hand if you knew the right answer. Take a look around. Keep your hands up if you knew the right answer. Of course you did. This is the guy who gave the talk about types. Propositions as types. This is what we learned about yesterday, the Curry-Howard correspondence. Propositions as types. Logic as types. We can't use logic very well. One does not simply use static types. But wait, we're programmers, right? We can learn this. We've been here before. We didn't know what we were doing when we started out.
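The two-card answer can be checked mechanically. Here's a minimal Ruby sketch (the names, like `could_falsify?`, are mine, not from the talk): a card is worth flipping only if some possible hidden side would falsify "even implies red."

```ruby
# Brute-force check of the card puzzle: the rule is "even number => red back".
# Each visible face could hide any value on its other side; a card is worth
# flipping only if some hidden value would falsify the rule.

NUMBERS = (0..9).to_a
COLORS  = [:red, :brown]

def could_falsify?(visible)
  case visible
  when Integer  # a number is showing; the hidden side is a color
    COLORS.any? { |back| visible.even? && back != :red }
  else          # a color is showing; the hidden side is a number
    NUMBERS.any? { |n| n.even? && visible != :red }
  end
end

cards = [3, 8, :red, :brown]
must_flip = cards.select { |c| could_falsify?(c) }
p must_flip  # => [8, :brown]
```

Only the eight and the brown card can reveal a counterexample; flipping the three or the red card can never disprove the rule, which is exactly why most people's intuition goes wrong here.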
Turns out there's another study, of professionally trained statisticians, looking at whether, with all their training, they actually had a better intuitive sense of statistical relationships. Any guesses? Turns out, not so much. As humans we're really bad at this. There's a book, Thinking, Fast and Slow, that I recommend you check out. There's actually a whole section of my slides that is references, about eight or nine different books that I think are really, really interesting. So, professional programmers. This guy, Conor McBride, wrote a very, very interesting paper. We heard about Idris yesterday; Idris uses dependent types, and it's a fascinating field of study. He wrote this paper, How to Keep Your Neighbours in Order, looking at how to create a correct-by-construction binary search tree. This is isomorphic to a total ordering on an array, meaning that if it's correct by construction, you can't insert an element into the array out of order; you can't insert into the tree out of order. In this paper, he takes us through the different ways he attempted to implement this. This is a professional researcher in dependent types, and it took him a really long time to get one algorithm right. I'm not saying types aren't good. I'm just questioning whether they are the best thing for us. This idea of correct by construction also requires that we actually know what we're doing, because math isn't automatically right. Math is simply a tool. You can start with a wrong premise and, with perfect deduction, reach a perfectly logical conclusion that's completely wrong. So correct by construction is not some silver bullet that is just going to save us when we start writing programs. This is why I say that we can't use types well, and it's one of my arguments for not relying on types or seeing them as something that's going to save us.
In thinking about this, though, and about all the conversations I've had over the years with different people about different programming languages, and especially about types, because Ruby gets a lot of shit for being dynamically typed (it's my favorite part of the language), I came up with what I think are eight different areas where we misunderstand what we're doing with programming. This is inspired in part by the fallacies of distributed computing, which I've also been studying and thinking about recently. The first one that I see happen all the time is the idea that scale is the same. We love to look at a problem and say, I did this right once, so if I do it right n times I can solve basically any problem. If I need one mailbox, I just keep adding mailboxes. If I need to ship one container, I just keep adding containers. The idea of scale is not that we think everything is the same size; it's the idea behind "will it scale": if I can do one, can I do n successfully? And I don't think all problems can actually be approached like this. There's a distinction between a qualitative difference and a quantitative difference: a quantitative difference is just a matter of degree, but two qualitatively different things may not be comparable at all. By the way, I'm not going to be able to prove any of these. These are things that come from experience, mistakes I see us make when we reason about different systems based on these fallacies. So, this idea of scale, that everything's at the same scale. In code, this is like asking how many lines of code we have. We don't ask questions like, what's the biggest program you can write in Ruby, or in Haskell? But we do often refer to programs as 10,000 lines of code or a million lines of code, and we don't necessarily see that there are any divisions between these.
A colleague of mine, Chad Slaughter, has talked to me about the idea that at some point, around 2,000 lines of code, there's a barrier. He's taught people to program, and they just keep adding things to a script until it gets to about 2,000 lines of code, and then they hit a wall. They can't go any further unless they can pull out a different level of abstraction. At that point they need to understand, say, how polymorphism works, so they can start using classes effectively. He thinks there's another such qualitative difference somewhere around 20,000 lines of code, and again at 2 million, maybe 20 million lines of code: places where people lose the ability to keep all the concepts in their mind and fail to make progress on the problem anymore. We also have this idea that everything carries about the same risk. When we go down to the grocery store to pick up bread and stuff to make a peanut butter and jelly sandwich, we don't suit up in a racing suit. It's very possible that you could get killed in a car accident just driving to the store, but we don't actually perceive that risk, so we don't treat it the same as getting on a track and going 200 miles an hour. So there are different risks in the type of program that we're writing. If you are launching rockets, I hope you're not using Ruby. You might not even want to use Haskell if you're launching rockets; it's really, really important to be right and to have the timing and all that down. But on a web server, if you're sending a form to somebody, that's not critical. Does it have to be perfect? What's the risk if it fails? Related to this idea of scale, there's also this idea of cost. We just do this little formula, right?
We do the t-shirt-size thing: that's about a medium, that's a large, that's going to take six points, our velocity is whatever. When we do that, we assume that everything is about the same, that if we average it all out we get some n times the number of lines of code, and that's how much it's going to cost. But there's nowhere in the code where we can see something that says, by the way, this section right here cost a million dollars to make. We also have this idea that things are all about the same granularity. In fact, we say things like "everything is an object" or "everything is a function." When you have food that's all the same consistency, the same granularity, you have something like mashed potatoes, or puree, or baby-food mush. We need texture: chewy and crunchy and these sorts of things. Everything being the same ends up being very difficult for us to work with. Imagine if I said, this is a book where everything is a word. We have paragraphs, sentences, sections, chapters; we don't have everything at the same level. Everything in Ruby is an object? By the way, where's the call stack? How do I put my hands on that? Where's the scope? Then we have this idea of the same abstraction, and man, it's hard to find a picture for an abstraction, so I don't have a picture. The idea of the same abstraction is that whatever we're dealing with can be dealt with using the same sorts of things. We have classes and methods, so we make more and more classes and methods, because that's all we really have to deal with our problem, and we see everything as if it's just a bunch of classes with methods. And then there's this idea that time is just the same. Maybe it's a parameter: in Newtonian mechanics you can take t and run it forwards and run it backwards, and the system will just go along fine; t doesn't really matter. When we talk about compiling, we talk about doing this thing at a point in time. I know when I compile my program that it'll run
as if that point in time is the most important thing. I'm not saying it might not be the most important, but it's a question: at the moment that I compile my program, does it actually know everything that it needs to function correctly? I compile it and ship it off. It's like we're stuck in the shrink-wrapped-box era of building software. You've got to package it all up, and there's a big cost to cutting all those CDs, putting them in the packages, putting them on trucks, sending them to the stores. Everything has to be right. So when I compile, I want my program to run. But we have the cloud now. I could deploy between the time that you load one page and the time you fill out your form and hit next; you could be running on a different server that I've just sent out there. I might be able to get on that machine, change something, and make your page refresh. They're computers; we can do all kinds of stuff. But we pick this one point in time and we say: when I compile, it's supposed to work. And this is the biggest idea: the idea that these systems all have the same sort of order, order as opposed to disorder. Once upon a time, science thought that all phenomena could be explained with Newtonian mechanics, where, as I mentioned, t is a parameter: you add one to t and you see the system evolve this way, you subtract one and the system goes back that way. It turns out the universe doesn't work like that. Does everyone know what this is? It's the index of all Julia sets. It's an amazing thing, the Mandelbrot set. It has tremendous detail and complexity. What you're seeing here is a colorizing of how quickly a point diverges to infinity; where it's black, it doesn't diverge, so the black points are in the set. Everything on this fractal boundary diverges at a different rate, and the rates at which points diverge yield this tremendously complex and, I think, beautiful illustration of how amazing our world is. It comes
from this simple equation: take a point c, a complex number, start with z equal to zero, and iterate z equals z squared plus c. So zero squared plus c becomes the new z, and you just keep doing this. If the iteration converges, c is in the set; if it diverges, the rate at which it diverges is what's used to colorize that picture. The study of chaos started sometime in the middle of the last century, and there's a fascinating book, almost 30 years old now, called Chaos; I highly recommend it. In the study of chaos, they found that systems do not necessarily have simple order. There's this idea that things can be sensitive to their initial conditions. We can't just put in t at zero, shift the system over here with the same inputs, and expect the same output, the same behavior. It turns out a system can be very sensitive to its initial conditions: change them just slightly, and instead of staying about the same over time, it wildly diverges. This is called the butterfly effect, the idea that a butterfly flapping its wings here in Barcelona could cause a hurricane in New York City. But this is not necessarily random; these are deterministic systems. This illustration blew my mind when I first saw it: these points are being filled in at random, but in the end they show this picture. So if this is a system that we're interacting with, where we need to interact with it over time: at this point we don't know much about the system, and at this point we can see it very clearly. If we have to interact with the system over time in order to even know what it is, is compiling once and then shipping it off necessarily going to work? What if this is describing a process, not just the thing being solved but the entire process? The really interesting thing about the study of chaos theory is that they started looking at processes instead of state: at becoming, instead of just being. This idea of order, I think, is very, very important. There's this thing called the Cynefin framework.
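As an aside, the iteration just described (start at z = 0, keep applying z = z squared plus c) takes only a few lines of Ruby. This is a rough sketch of the escape-time idea, not the exact coloring used for the picture:

```ruby
# Escape-time sketch of the Mandelbrot iteration: start with z = 0 and
# repeatedly apply z = z*z + c. If |z| stays bounded, c is in the set;
# otherwise the iteration count at escape is what gets mapped to a color.

def escape_count(c, limit = 100)
  z = Complex(0, 0)
  limit.times do |i|
    z = z * z + c
    return i if z.abs > 2.0  # once |z| > 2, the iteration provably diverges
  end
  nil  # never escaped within the limit: treat c as (approximately) in the set
end

p escape_count(Complex(0, 0))   # => nil (in the set)
p escape_count(Complex(2, 2))   # => 0   (diverges immediately)
```

Points near the fractal boundary produce large, wildly varying escape counts, which is exactly the sensitivity the chaos discussion is about.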
It's in my references; it looks at these different types of order, and I think that for the different types of programs we write, we need to understand the different types of order we're dealing with. We can't just assume that everything is Newtonian dynamics and things go backward and forward just as easily. This is called a Lorenz attractor, and it's another illustration of a dynamic system, in this case almost periodic. Almost periodic: you can see that it looks like it has periodic behavior, it looks like it goes around and around, but it doesn't; it never repeats. You can also see that there are two loci that these trajectories go around. It flips between these states, and the point at which it flips is not easy to predict; it can be very close. Nope, not going to go into that one; it's going to go into that. So this is where I say that I think types can do more harm than help: if we mistake a complex or chaotic system for a simple system, we can have really serious failure. In the Cynefin framework, they talk about the fact that catastrophic system failure occurs on the boundary between simple and complex systems, because you think you're dealing with a simple system, you turn one little knob here, and the whole system falls down. And the final fallacy that I see is that programming languages are general purpose: the idea that we don't really need to tell you what this is good for, you can kind of do anything that you want. I think this is actually a terrible idea, for many of the reasons I just explained in the previous one, about order. All right, so we've got two or three, one left: nil. Nil, the most hated thing in Ruby. NoMethodError on nil. I hate nil. If only we could not have nil. I wish I had options. You know that every single method can return nil? Yeah, it could, every single method, so you'd need an option type on every single method: an option of something, or nil. What is nil, though? It's an object. It's not
null; it's not that dude's billion-dollar mistake. It's an object, like the empty string, the empty hash, zero. Now, in functional programming we love this thing; it's an essential part of a category: an identity function. Give the function x and you get x back. Compose some functions, then compose them with nil, and you should get nil back. Nil should be the thing that turns any function into an identity function for nil. Let me fix this problem you have with nil. Not trolling: this is all you need. If nil bubbles all the way through your system and ends up as a blank string in your template, you can have some logic in your template, but you don't need to worry about it everywhere else. Ruby's got a lot of problems, and this inability to operate with nil is one of them, but it's this easy to fix: nil is just something that, if I give it to you, you give it back to me. We don't need static types to deal with nil and NoMethodErrors in Ruby. There is an idea, though, that often goes with static types: functions. Do we have functions in Ruby? Yes, we do; you saw them. They require a lot of boilerplate: you've got to make this module, not a fake module but a module, and you've got to have this extra thing that does some stuff and makes them private, but all you really want is just a function. So we should just have functions. Gary Bernhardt, who has this company called Destroy All Software, has a really great talk called Boundaries and a screencast called Functional Core, Imperative Shell. The point that he makes is that a lot of the problems we deal with have these very well-defined functional parts, where we operate on values and don't mutate a bunch of state, and then around that, where we interact with the world, we have some objects, and objects deal with the messiness of the world, which it turns out is a really good thing. There's a fantastic paper, The Power of Interoperability: Why Objects Are Inevitable, which basically says that if your language doesn't have objects, you'll create them.
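To come back to nil for a second: the fix proposed above, "if I give it nil, you give nil back," can be sketched by reopening NilClass. This is an illustration of the talk's idea, not a recommendation for production code:

```ruby
# Sketch of the "nil propagates" idea: reopen NilClass so that any message
# sent to nil just returns nil instead of raising NoMethodError.
# (This mirrors the talk's proposal; whether you want this globally is a
# separate question.)
class NilClass
  def method_missing(name, *args, &block)
    nil
  end

  def respond_to_missing?(name, include_private = false)
    true
  end
end

# nil now bubbles through a chain of calls instead of blowing up:
p nil.upcase.reverse.length  # => nil
```

With this in place, nil acts as the identity-for-nil described above: any method applied to nil hands nil back, and it only needs handling once, at the edge, such as in a template.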
And the reason you'll create them is because objects are fantastic for dealing with interoperability: that part over there can change, and we can still communicate. The thing about objects that's really important is the messages. Messages are the core of objects, not types, not even methods necessarily, because when you say cat.meow, that meow could come from a module, from an include, from a superclass; it could come from a singleton method on that object; it could come from method_missing; it might be nil. The basic idea is that you're sending a message, and the message is the important thing. So if we have functions and we also have objects, I think we'll have a much more powerful system. And if we have functions, here's where you can put your types: in a place where we can understand something very well, not at the place where we need a lot of interoperability, but somewhere really well defined. We can basically do this in the existing VM. In Rubinius we can add instructions that deal with these types on functions, while the rest of Ruby, the objects, can continue being useful for interoperability. Functions are great for things like CLIs; a lot of people use Go now to write command-line interfaces. They're also fantastic for parsing, which is one of the basic things that we do. Parsing is really, really well understood, so if we had functions we could write JSON parsers and YAML parsers and XML parsers, we could make HTTP parsers, we could possibly even make garbage collectors. And Rubinius has this idea of Ruby written in Ruby: if we didn't have to write the VM in C or C++, but could use functions in Ruby to do that, then we really could have Ruby in Ruby all the way down. So, this idea of general purpose: I think it's the worst idea that we have in programming, and what I'd like to do is propose that we fix it. The way we fix it is by dealing with languages in many different forms for the specific task
that we're trying to accomplish. What if a grammar were a first-class thing in our system? Parsing expression grammars in particular have this fantastic quality called composition, which we all love from functional programming languages. So if I have a grammar in my language to parse a date, and you want to make some sub-language, say a YAML parser that parses dates, you can just compose the date grammar from my language with the rest of your grammar, and you can build up different levels of abstraction with different languages. Now, I haven't talked much about Rubinius. Rubinius is the project that I work on. It's an implementation of Ruby that is really, really easy to try out. If you're using MRI, add this to your Gemfile; for things like ruby-debug that use MRI internals, run bundle update, and then your app should boot. That's what we expect; if it doesn't, let us know. That's all you need to do to try it out. We have 2.1 compatibility, or we're converging on it; keyword arguments are on master and will be released, hopefully, next week. We have 1.8.7 compatibility on a separate line, for all those apps that are still hanging around because it's too hard to migrate them, or it costs too much, or any other reason. I work at Enova; thanks to Enova for making it possible for me to be here today. There are a bunch of apps that we're migrating. Rubinius works; it's in production. Your app is different from the apps we're working on, so your app might not work, but if it doesn't, we'll fix it. It's my hope that in the near future I'll have 50 or 100 more people contributing to Rubinius. But the point I want to make here is that even if we had a hundred developers who could work on Rubinius, that's a fraction of everybody here. I really, really strongly believe this: open source is not a spectator sport. The Rubinius core, the bytecode compiler, and a whole bunch of tools that we're working on now are all written in Ruby, and you're all
Ruby programmers. You can go into Array and fix it. If we've got a problem in partition, it's not written in C or Java; it's written in Ruby. And let's be serious: who here would rather be writing Java than Ruby? No one here wants to write Java. But businesses don't care about that; they care whether they're making revenue. They don't care if you're happy. That's a harsh truth, but it's serious. So if we want to use Ruby, we have to make sure that we're making Ruby as good as anything else, because I don't want this guy telling me what I have to do. Oracle wants you to use Java, the same way that Microsoft wants you to use .NET or C#, or Apple wants you to use Swift or Objective-C. They control it; they have an interest in it; your happiness is not the most important thing to them. Oracle even published this paper, One VM to Rule Them All. So, closing. Ruby's fun, so don't worry, have fun. But please think about a couple of these things. The first: don't worry about people stealing your ideas; if they're any good, you'll have to cram them down people's throats. So I'm not worried that we're going to get static types in Ruby anytime soon. Sometimes the questions are complicated and the answers are actually very simple: just let nil be returned, and then we don't have NoMethodErrors on nil anymore. You never change things by fighting the existing reality; to change something, you have to create a new model that replaces it. And no government, no army, no corporation (I put corporation in there) can stop an idea whose time has come. I want this idea to be that we can make Ruby a system that we can use effectively, without suffering from these fallacies of programming that I've talked about. And finally this one, a really hard lesson to learn; it's only taken me about eight years: complain about the way other people make software by making software. Please follow me on Twitter; lots of really, really cool stuff coming for Rubinius very soon. And thank you.