But Rich Kilmer there was also one of the co-founders of Ruby Central, along with David Alan Black and Dave Thomas, who aren't here. I just wanted to include that for completeness. I wanted to show you this site real quick. This is life-is-too-short-for-meetings dot com, just one second. I made that site just as a reminder: we're getting old. I haven't been here in like six years, I think. And by here, I mean RubyConf, because RubyConf is just a location that floats around through space. And it's true, I haven't ever keynoted it. In fact, the only talk I ever gave at RubyConf was in 2005 in San Diego. Was anybody there? Matz, Rich, Jeremy, yeah, Eric. And Takahashi. It was a continuations tutorial with Jim Weirich, who has since left us. It's strange: I haven't been here for a while, and the first time I come back, Jim's not here. So I have this lazy way of doing talks. Matz even retweeted this for me, thank you. Coming back to the Ruby community after a while, I just wanted to see what people thought, even outside the Ruby community, that we needed to be aware of. I'm not going to put the answers up, because there were a lot of them. It was useful; I'm not saying the answers weren't good. But what I saw was a lot of stuff that sort of verified that Ruby is getting old. It was stuff like: some of the libraries are getting old and decrepit and don't work very well, or the documentation is out of sync with the code. Things are slow. "It's not all about Rails" was a pretty constant theme. And then there were a whole bunch of feature requests for the language that we've been hearing since 1999, a lot of them. It used to be a thing when we started with Ruby as English speakers on the ruby-lang mailing list: pretty much every new person who joined would request the same set of features to make it more like Java. Now I think they all want it to be like TypeScript. Here we are.
I wrote this blog series back in 2006, after what I thought was already a pretty full career of being what I like to call a systems euthanizer. Euthanasia, for those of you speaking English as a second language, means killing something out of mercy, because the thing needs to die. System after system, I have been part of projects to painfully kill those systems. They're sick. They're slow. They're legacy systems. They're dying. They're unhealthy. And so we painfully put them out of their misery. And in the process of doing it, we usually create new systems, of course. We don't just kill a system and leave its dead carcass behind. I realized, though, because I work too hard when I work and put so much of myself into it, that as I'm killing these systems, I'm also crushing the last remnants of the hopes and dreams of the developers who created them. So it's a pretty miserable job really, both the job of a euthanizer and the job of someone who creates systems, because they tend to die so frequently. If you don't know this, this is sort of a non sequitur, but you'll see that it kind of matters: I came to the software world originally as a musician. This is me doing a recent jazz show in Little Rock. And it changed the way I thought about how software development should be done, and really just how I assumed it would be done. And there was one major thing that stuck out. It is this word. I said it a minute ago. When I said legacy systems, how many of you thought, oh yes, legacy systems, I love legacy systems? Thank you, one person. We've created this negative feeling around this word. But I'm a musician. I still think of myself as a musician who just happens to be programming because it pays better. And as a musician, I came in with a different assumption of what legacy meant.
So people would say legacy, and I would think, okay: something that someone leaves behind, like an inheritance, a bequest, a gift from a past generation. Legacy is a really nice thing when you think about it from that perspective. But in our world here, we think of it as a negative thing. It's a nasty old messy pile of junk left behind by someone else, whose name we usually curse in our everyday work. And that's kind of strange, you know? As a musician: right now we're listening to the second movement of Beethoven's seventh symphony. This is a picture of Beethoven. Beethoven left behind such a legacy of his music that even we Americans can pronounce his name properly. By the way, everyone knows the fifth, but this piece, the seventh symphony, is the best one. Go listen to it. I'm a saxophone player. This is John Coltrane. How many of you have heard of John Coltrane? Most people have. Every saxophone player obsessively tries to imitate John Coltrane, even today. Even 15-year-old kids in New York City are playing, and you can hear Coltrane in their sound. And jazz is irrelevant at this point, and still they're doing it, because he left behind this legacy. The same is true in the world of literature. One of my heroes, Kurt Vonnegut, for example, left behind words that I have read, even after he died, that have shaped the way I think. Probably even the way I'm speaking to you now is heavily influenced by having read Kurt Vonnegut's work. Architecture, same thing. This is Gaudí. Walk through Barcelona and you see his legacy and his imprint on the city. Even the world of fashion. This is a piece by the Japanese group Comme des Garçons, which is my favorite fashion house. I'm not sure what your favorite is, but mine is Comme des Garçons. Not just because I want to impress Matz, but they are Japanese.
But they create these pieces that not only will live theoretically forever if they can be preserved, but change the course of fashion. They have rippling effects. Same thing with art. This is Joan Miró. I like Miró so much that I have basically matching tattoos on my arm for this painting. Miró left behind a massive body of work that we can all still enjoy and still be influenced by, and he also pioneered the idea of visual poetry. Most people don't realize this, but he created a visual language of symbols, and his paintings at certain periods of his career were poems. So this is a poem. But we have a problem in the software world. Like I said, we use this word legacy negatively. We also have a really hard time shipping software. You're not going to be able to read this, but you'll get the gist of it. This is some data from the Standish Group's CHAOS report, and I say it's kind of questionable, meaning you might challenge its scientific merits. However, you will probably agree with the directionality of the data. What they do is they survey projects across the world and categorize them into successes and failures. The green at the bottom is successful projects. The red at the top is failed projects that just got completely scrapped. And in the middle are "challenged" projects, which they define as significantly over time or over budget, which to me is also failure. So look at how bad we are at making software. And this is just for creating software in the first place. Once we create it, and I made up this statistic, but I think it's about right: I travel around the world and I kill these things, and it seems like they only live about five years, in business systems especially. Unfortunately, software, when it is initially born, like most creative things, is not immediately perfect.
So if you create something, it may live for five years on average, but it may take longer than that to even get good. This is a bad situation: software might only live five years, and it takes ten years for it to even get good. It takes that long to iterate on it, actually meet the users' needs, perfect the UX, and make it perform. So we have a software problem. How do we turn this around? Well, this has become my quest. Some of you, very few I would guess, have seen me do a talk like this before. This one has different slides and different content, but I've been talking about this for the last seven years or so, and much of my work has been heavily influenced by the quest to figure out how to create software that doesn't have to be killed by people like me. So I initially went to what I thought was a perfect source of information, which is Michael Feathers. He wrote this book, and if you work in legacy code, which you all do in the bad sense of the word, meaning code that already exists and needs to be modified, you should read this book. You'll have to hold your nose and read through the C++ and Java stuff, but it's worth it. So I asked Michael, on a video chat a few years ago: how do we create legacy code? And I was incredibly disappointed by his answer. He said, well, let's see. People need to be afraid of it. It needs to be not dynamic, basically in stasis, and very difficult to change. So I don't think he understood the question I wanted to ask him. But I'm pretty sure we could all do that. So I turned to Twitter, as I like to do, and I did this again recently and got very similar answers, asking: what are the characteristics of software that is old but that we still use? And I got the same negative bias, because people think of old software as being bad. Fear of touching it, the sunk cost fallacy, "it's old but still in use because of a manager." That was one of my favorites. But I got some good ones too.
It works, it's stable, it's valuable, it provides value to users, et cetera. So I was sort of encouraged by that, and I got a lot of things that got me thinking. But back around the time that I wrote that big rewrite article series, which is sort of relevant timing-wise, there was this post by DHH on the Signal vs. Noise blog from 37signals. It joked that enterprise was going to be the new legacy. And he really ran this propaganda campaign against the word enterprise, if any of you were doing Rails or watching back then. We went from enterprise being a good word, often used for negative purposes, to being something you would chuckle about and make fun of. But the sad thing is that David's way of panning the word enterprise and making us all hate it was to compare it to legacy, which is obviously bad. So I found this comment to be really well written. I'll read it to you. "Careful, legacy isn't a bad word. Legacy usually means tried, true, and of enough value that it lasted long enough to be old and outdated." True, right? "To mock legacy is to look at the successes of the past and to declare that they aren't to be revered or respected. Most of what runs our economies is legacy software. In the future, I hope the software I'm creating now is highly regarded enough that it's still around and being referred to as legacy." I thought that was really inspiring as I typed those words into the comment box. Unfortunately, though, the software I was creating at that time was not highly regarded enough. In fact, it was one of those red boxes on the Standish CHAOS report. And it was a year of my life, and of a colleague's life, that I swear we still have PTSD over. And I mean that literally. It's one of the only periods of my career that I regret. It was dismal. I still look at it like I lost a year of my life in 2005 and I can't get it back. Miserable.
Before that, on a more positive note, toward the beginning of my career, I worked for General Electric. At GE Appliances we had this system running on a Honeywell Bull mainframe. Who has a Honeywell Bull mainframe? Nobody has one anymore; it's amazing. The funny thing is, I think it was 25 years old at the time. It had a custom TCP/IP stack, it had a custom RDBMS, and we had created an RPC mechanism that we could use to talk to it. It was terrifying. The hardware was so old that you could not get replacement parts from the manufacturer, because it wasn't being manufactured anymore. So we had a team of people who knew how to manufacture replacement hardware parts for the Honeywell Bull. But we could not get rid of it, because what it did was both so valuable and performed so well that every attempt to replace it failed; the users would reject the replacement. There was also just too much information in this thing. I came in thinking, well, this is going to be terrible, but I left that job, unexpectedly, feeling a great deal of respect for what the Honeywell team had created at GE. There was even a team called the Bull Exit Team that had been in existence, at that point, longer than my software career, and it still existed when I left. It was eventually disbanded, and I have heard recently that this thing is still running. It's amazing and terrifying, but amazing. So I had that influence. At that time, I think the system was my age; I was 25 or so when I started there. Speaking of which, I'm 43 now, and I don't deserve to be alive, because I take really bad care of the system. No refactoring, no maintenance. You may have seen me before; I've looked a lot better in the past, and I've also looked worse. But how do we create systems that survive? Well, the first step, like the Standish data shows, is that the system has to be born.
When Twitter first launched, did you know that it was a database-backed web app written in Rails? Everyone knew it at the time in our community, but a lot of people don't realize it now. And if you tell them, they think, well, that's ridiculous. A database-backed web app? For Twitter? Because Twitter is a messaging system. It's this massive, asynchronous, parallel thing. It has all these crazy optimizations, because you've got Justin Bieber with millions of fans, you've got Trump, you've got all these different things that require high volume. And to do this in a database? I mean, they really typed something like rails new twitter, or it was probably script slash generate back then, but it's not actually stupid. We used to see the failures all the time, and for a while we would make fun of it; there was the fail whale you would see. But if they hadn't created it like this, we wouldn't have Twitter today, because it never would have amounted to anything. They would have run out of money while trying to build the distributed message queuing system that Twitter needed to be. And thank God they didn't realize that's what it was going to need to be. They just made this micro-blogging system in Rails. So they got it born. Had they not, we wouldn't care about the legacy of Twitter's software. And it's harder to get software born than we might believe. I'm going backward. You can't even go backward, because when you get older you can't feel the keys like you used to, and you press the wrong ones. So the conversation with Michael Feathers wasn't all doom and gloom. He gave me a link to this thing at the top. I put a TinyURL link, I don't know why, but it says "code alive."
It's an article he wrote, I think also in 2006, where he says that your software is alive, and he starts thinking in terms of this biological metaphor for software, which is really interesting for someone thinking heavily about how to create systems that can live long, healthy lives. What he says in the article is that it all comes down to one thing: code survives by providing value and being difficult to replace. The value has to be greater than the difficulty. This is an interesting point. The biological metaphor is really what got me interested when I talked to him about this: your code is alive. And he sent me looking at this paper by Dick Gabriel, Richard P. Gabriel. If you're not aware of Dick's work, he is, I believe, the first person to have written about software in terms of patterns, taking Christopher Alexander's pattern ideas and applying them to software. He wrote the Common Lisp Object System. He did a bunch of really influential, important stuff, and not enough people know his work. He wrote this paper that you can find; I have the URL there, "ULS Gabriel" is what that TinyURL link says. "Design Beyond Human Abilities," which is a pretty enticing title anyway. Basically, he looks at make-believe systems to ask: what would you do if some crazy parameters were present? For example, trillions of lines of code. How would you maintain a system that looked like that? Which is not even likely to ever happen, unless the code were generated. But what really got me interested is that he ends up turning to this biological metaphor for systems. He says biological systems are very much larger than anything coherent people have built, not just in software but in general. A bunch of interesting things come out of this. But as we talk about complexity in systems, if you look at biology, the human body for example, you can get some really interesting examples of approaches that would work.
So we're biological. And like I said, I have no business even being alive at 43, given how I've treated my body. And yet your software systems can't compete with me. In general, that's pretty sad; we should both be embarrassed. How do we create systems that outlast us, or at least me? So this sent me down this path. First, I started thinking: okay, how do systems stay alive, like me, when I'm not giving them a lot of input to make them better? There is a concept in biology called homeostasis, which I honestly don't understand that well, because I am so not a scientist it's insane. But homeostasis is a balancing process in biological systems, where different subsystems in the organism play different roles that might be at odds with each other, balancing each other out through an effect called negative feedback. If something happens that pushes the organism out of balance, something else happens to counteract it. So you have the brain, which manages the whole thing; the liver, which metabolizes toxic substances; the kidneys, which deal with blood water levels, et cetera. It's a system where everything balances everything else, and all the right components are there to make it work. It's the same way we talk about really good agile development systems, systems where all the components need to be present to balance each other out, or management structures with executives who play different functions to balance each other out. That's a simplified, kind of dumbed-down version, but that's what homeostasis is. And when you are a biological organism, if you're unable to maintain homeostasis, you reach a state called homeostatic imbalance, where those checks and balances stop working, and it can lead to death. The good news in that case is that you're already dying. We're all dying at an alarming rate: of the 50 trillion cells in your body, 3 million die per second. So just look around at the death that's happening around you. It is truly disgusting.
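Homeostasis is easier for me to picture as code. Here's a toy Ruby sketch, my own illustration rather than anything from the slides, of negative feedback: a correction that always pushes opposite to the drift, so a perturbed value settles back at its set point.

```ruby
# Toy negative-feedback loop: the correction term has the opposite sign
# of the drift, so repeated applications pull the value back toward the
# set point instead of letting it run away.
def regulate(value, set_point, gain: 0.5)
  value + gain * (set_point - value)
end

body_temp = 41.0 # pushed out of balance
10.times { body_temp = regulate(body_temp, 37.0) }
# body_temp has now converged back to within a few thousandths of 37.0
```

Flip the sign of the correction and you get positive feedback: the value diverges instead of settling, which is roughly the homeostatic imbalance described above.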
It's like a zombie movie in here. But that's interesting, because here I am, still me. We all grew up hearing about this: skin cells falling off and regenerating and all that. We say we're not even the same organism we were, but somehow I'm still me. And you can recognize me, barely, when I come back to RubyConf six years later. You still know it's probably me. Or it might be this guy named Chaz who goes to RailsConf and looks exactly like me. It's weird. Have you seen him? He's there. It's me, it's me, it's me. I'm just called Chaz for that. It turns out it's not really true that all of the cells in your body regenerate, but a lot of them do. And maybe that's a key. Glenn Vanderburg, in response to one of my many tweets about this subject, said: we learned that software should start small and grow; it's challenging to replace an existing system that way. What he's saying is that we already know dealing with small pieces and growing them from there is a good idea, but it's very hard to take a whole existing system and replace it by starting small and growing. And that's why we're running into problems. I also asked a bunch of people, many times: what are the oldest surviving systems that you use regularly? I'm trying to get a sense of the old systems people are actually still using, probably not because they're forced to. You can see most of these things are from the UNIX world, probably partially because that's the best software and partially because the people who follow me on Twitter are UNIX people. Until now, because now I work at Microsoft, but that's a whole other story. You can categorize these things into two categories. They are either really small components, the UNIX philosophy of tools that do one thing and do it well, or they are systems, and those systems are probably made up of a bunch of small components. Stuff like the X Window System, or Apache, or probably a better example than Apache would be the Internet itself.
You get the idea. So we've been using systems like this for a long time. Here's a row of them that are still driving down the street, much older than software and even older than me. Each one is a system, a vehicle, made of all these little components that can be replaced. I'm not sure what I'm doing with my hand here. But the slides are so irrelevant they're only a distraction to me. So then I start asking myself: well, what if we built software this way? What if we think in terms of the biological metaphor, the idea of regeneration? What is a cell in this context? What is a cell in the context of a software system, and what is a system? Where are the lines between those, and how do you know what to build and when? Well, the cell thing to me is pretty easy to get. A cell is just a small thing. It's a tiny component. If you've ever had a task to complete and you go look at the relevant piece of code and it's tiny, you think, oh good, this is going to be easy, right? If you go look at something and it's big, you think, oh shit, now I have to read it and try to understand it, and who knows? It might not go well. You could watch this talk from RailsConf 2014 by Sandi Metz. Right at the beginning she says something after which she could pretty much have skipped the rest of the talk. Not that the rest wasn't great, but this is so good it's worth sitting through the whole conference for: all of the problems we cause have the same simple solution. Make smaller classes, make smaller methods, and let them know as little about each other as possible. It turns out this is a summary of my talk too. Probably we could all just get up on the mic, say that, and be done with almost every talk at the conference.
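As a tiny Ruby sketch of that advice, with class names invented purely for illustration: two classes small enough to read at a glance, where each knows as little as possible about the other.

```ruby
# Small classes, small methods, minimal knowledge of collaborators.
class LineItem
  attr_reader :amount

  def initialize(amount)
    @amount = amount
  end
end

class Receipt
  def initialize(items)
    @items = items
  end

  # The entire public surface. Receipt never reaches inside an item,
  # so any object responding to #amount can stand in for LineItem.
  def total
    @items.sum(&:amount)
  end
end

Receipt.new([LineItem.new(5), LineItem.new(7)]).total # => 12
```

Because Receipt assumes nothing beyond that one method, either class can be thrown away and rewritten without the other noticing, which is the whole point of the cell metaphor.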
When I was working on Wunderlist, which was the last job I had before we were acquired by Microsoft, I made a rule. I was CTO, and I would say: you can write code in any language you want, with any technology you want, and you don't have to ask me. It doesn't matter who you are; you can be an intern. As long as the code is this big or smaller, and I would just hold up my fingers like this. There were a few other rules, like it had to work with our deployment system, and you had to find someone else who agreed it would be a good idea to use that technology. Simple, obvious ones. But you literally didn't have to ask me about it, though I would prefer to hear about it later. Because I realized that even if you wrote a whole critical service in the Idris programming language, which probably only 70 or 80 percent of us in this room are expert in, if someone on call hit a problem, and the code was this big, and they couldn't figure it out or couldn't get it to compile, they could probably just read it well enough to understand what it did, rewrite it, and deploy it. That was the idea. And if they did that, I would like it. As it happens, with one of the most junior people on the team, I wrote our one and only set of Haskell services. Some of the people on the team were angry that we wrote Haskell services, because they thought that was academic, elitist, crazy stuff they couldn't understand. Then one of the angry people came and rewrote it in Go. And then another person rewrote it in Clojure, and another person rewrote it in JavaScript, and another person rewrote it in Ruby. And as CTO, I did not think that was a waste of time, because it was this much code. They just didn't want to deal with the Haskell, and I was fine with that. It turned out the Haskell performed so much better than all the other versions that we stayed with the Haskell.
What we didn't do, though, is destroy the code regularly enough. Having small things is good, but if you want to really use this biological metaphor, you should be destroying those things. And like I said, I'm okay with it if you destroy something this big and recreate it, replace it. They did that, but then we ended up sticking with the Haskell, and I'm going to talk about that in a sec. If you replace the cells regularly, it enforces some things. One is that it shows the interfaces to the component are clean enough that the rest of the system doesn't have to be bothered by the fact that you've replaced it. The other is that it proves to you that you can replace it, which I think is an important meta-system around software development. If you don't have a meta-system for knowing that you can replace code, then you probably cannot replace code. At Wunderlist, we built up this crazy, seemingly over-engineered system. In fact, when they did due diligence on us for the acquisition, I got a glowing review, this 40-page report that used the word over-engineered in a positive light to describe what we had done. But it was this crazy, over-engineered system of cells that were regenerating, and you'll hear more about it. Because this system ran so well, especially the Haskell part, we didn't have to change it. Because when you write Haskell code, if it compiles, it works. That's how Haskell works. So there were never any bugs in the Haskell code, because it compiled. But because of that, a problem occurred: the dependencies changed, and Cabal, the package management system for Haskell, changed out from under it. By the time we did the acquisition, we could no longer actually compile and deploy the code. So we had a critical problem with a system that had never had an issue, because the ecosystem around it changed.
And we learned that we should be destroying and replacing things even when they're not broken and don't need to be replaced. It's a healthy meta-system for development. Along with this whole thing, as I started thinking about throwing away code, and being in a big organization now that has a whole lot of old code, for example Excel, I realized that developers have a fetish for code. They're obsessed with code. They think code matters. Code doesn't matter at all. The system is the thing that matters. That is the asset you create. The code is a liability. That sounds weird, but the more code you have, the worse your system is, in a sense. If you took two systems that do exactly the same thing exactly as well, and one has 200% more code than the other, the one with more code is worse, right? Because you don't want that code. So think about it this way: you need to be creating a system. The code is just part of it; it just plays a part. Back to Dick Gabriel's "Design Beyond Human Abilities": he also reaches the conclusion that cells sometimes need to be destroyed. He talks about how an external cell can command a cell to destroy itself without releasing toxins, without destroying anything around it. And that influenced how we deployed as well. I've been talking about code, but runtime is the same way. In the 90s at GE, we had a server that we had set up, and I remember looking at the uptime and it was something like 600 days on this Unix server. We kind of high-fived: great job, it hadn't rebooted or crashed or whatever. But then we realized this is terrifying, because we don't know what's on this machine, and there's no way we could reproduce it. So we started this thing, and I coined the term immutable infrastructure to describe it, where we would do the same sort of destroy-and-replace at runtime in our back-end systems.
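Here's a toy Ruby sketch of that destroy-and-replace idea at runtime. Nothing in it is a real cloud API; Node and Fleet are invented stand-ins. Changing the software on a server means booting a fresh node with the new version and discarding the old one, never mutating a running node.

```ruby
# A node never changes after it boots; its version is frozen at birth.
class Node
  attr_reader :version

  def initialize(version)
    @version = version.freeze
  end
end

class Fleet
  attr_reader :nodes

  def initialize(version, size)
    @nodes = Array.new(size) { Node.new(version) }
  end

  # Deploying replaces every node rather than upgrading it, so no
  # instance survives a deploy and drift has nowhere to accumulate.
  def deploy(version)
    @nodes = @nodes.map { Node.new(version) }
  end
end

fleet = Fleet.new("1.0.0", 3)
before = fleet.nodes
fleet.deploy("1.1.0")
(fleet.nodes & before).empty? # => true: all brand-new machines
```

The same shape works with real instances: a deploy builds fresh images, boots them, and terminates the old ones once the new ones are healthy.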
Whenever you want to change a piece of software on a server, you have to throw away the server and replace it with a new one. In the 90s, that would have cost hundreds of thousands of dollars every time you did it, so it wouldn't have worked. Six years ago, it started to make more sense with AWS and EC2. Now this is how everyone does it. And by everyone, I mean bleeding-edge nerds using Docker, which probably doesn't seem bleeding-edge to a lot of you, but amazingly, it still is. So: never upgrade software on an existing node; always throw it away, create a new one, and replace it. And always be deploying. We had a rule; we were still using EC2 containers at the time, or what are they called, instances. And I had a graph of uptime on all the instances. I used to be proud of my 600 days of uptime; now, if I saw an instance up for more than a couple of hours, I would get worried. Why is that server not being recycled and destroyed? Because the system is not healthy in that case: we don't have a meta-system for proving that we can replace things. My goal, and we got acquired so everything got ruined in terms of my nerdy plans (everything wasn't ruined otherwise), was to have uptime be less than an hour: always cycling through servers, constantly, in an automated fashion. But speaking of this immutability, it brings me to the idea of immutability in code, which is also, I think, one of the most important things we can learn from biological systems: immutability and impermanence in code. Think in terms of pure functions. A pure function actually gives you an immutable, disposable scope. And you can't create huge, long pure functions; it ends up being harder than creating small ones, because you don't have mutable state to lean on.
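A minimal Ruby sketch of that idea, with an invented pricing example: a pure function's entire effect is its return value, so its inputs can be frozen and its scope is disposable. Same inputs, same answer, every time.

```ruby
# Frozen input data: nothing in the program can mutate it.
PRICES = { coffee: 3, tea: 2 }.freeze

# Pure: reads only its arguments, mutates nothing, returns a value.
def bill(order, prices)
  order.sum { |item, qty| prices.fetch(item) * qty }
end

order = { coffee: 2, tea: 1 }.freeze # frozen inputs are safe to share
bill(order, PRICES) # => 8
```

Because bill touches no outside state, you can delete it and rewrite it, or run it anywhere, and nothing else in the system can tell the difference. That's the disposable-cell property in miniature.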
So pure functions, meaning functions that don't have side effects, force you into a mindset. You can do this in Ruby, though you have to enforce your own rules, and thinking in terms of pure functions is a valuable way to force yourself to think in small pieces and create immutable stuff. And there's this whole new wave of functions as a service. Some of you have probably played with AWS Lambda or Azure Functions. StandardLib is a company that my VC firm has invested in, and we did it because they were trying to reinvent how you build things based on pure functions with no side effects, which is to say with no state, anyway. So that's an area to explore. How about this for a rule: never edit a method; always rewrite it instead. What if you had code folding on, and the rule on your team was that to change your code you couldn't unfold anything; you just had to make a new method with the same name and replace the functionality every time? This is a crazy idea, and in fact probably not a good one. But I like the mindset, because if this were the rule, imagine how it would change what you wrote when you created a method. It would be like Sandi Metz was sitting there looking at you the whole time: small. So here's kind of a summary of these ideas: the mutability of a system is enhanced by the immutability of its components. I just sat there silently so you could think about that for a second. Once you have all these little components, they need to communicate with each other. The best way to do this is through stupid, inefficient, simple interfaces. So don't worry about binary protocols, unless that's fun for you, of course, Rich; I'm just teasing you. Do the stupidest thing you can. An example of that is the UNIX standard-in and standard-out idea, the UNIX utilities. They just read from standard in and print to standard out, and that's what most tools in UNIX do.
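That Unix-style interface fits in a few lines of Ruby. This is a sketch; the filtering rule it applies, dropping blank lines, is an invented example, because the point is the stupid, simple interface, not the logic.

```ruby
# A tiny Unix-style filter: read lines from one stream, write the
# interesting ones to another. The streams are passed in, so the same
# code works against stdin/stdout or against in-memory strings.
def filter(input, output)
  input.each_line do |line|
    output.puts(line) unless line.strip.empty?
  end
end

# Wired to the real streams it becomes an ordinary pipeline citizen:
#   cat notes.txt | ruby squeeze.rb | sort
# filter($stdin, $stdout)
```

Any program in any language can sit on either side of that pipe, which is exactly why these tools have survived for decades.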
On our Bull mainframe, we had built this Bull RPC, which was a really dumb input-output thing using TCP, and we then created generators in all sorts of languages to talk to its COBOL programs. And it allowed us to create this extensible ecosystem around the Bull with our custom TCP/IP stack. This was my first Ruby experience at work, by the way. I was generating Java classes to talk to the Bull mainframe by getting the COBOL copybooks off the Bull mainframe. Those are sort of like the schema of these things, of the data structures in COBOL. I would parse them with Ruby, I would generate Java code, and I would commit that to CVS or whatever we were using at the time. And I did that so that no one would know I was using Ruby, but I didn't have to write the Java. In a sense, these interfaces, if you sort of zoom out, are the system. So this is just one stupid, simple concept, but if you think about the fact that the code doesn't matter that much, because it's so small and so simple, then it's the interfaces and the way the components communicate that become the system. So that's where you should pay your attention. Even though I'm telling you to make it dumb, I mean that in a good way. Make it as simple as possible. Simple, not easy, as Rich likes to say. Maybe a rule to keep in mind, written here in cursive that you can't make out, is Postel's law, which says to be conservative in what you accept. Oh, sorry, conservative in what you produce and what you do, and liberal in what you accept. It's a good way to deal with something that has to evolve over time. Now, something I did that's maybe, I think it's okay at a Ruby conference. Do I have a Ruby? Yeah, I've got a Ruby logo on here. A way that we did this at Wunderlist, in the Wunderlist organization, is we said, okay, we're gonna be heterogeneous by default. When I started there, I started talking about Scala a little bit, because I'd been watching the stuff that Twitter and LinkedIn had been doing.
And people would really snicker, in the same way that in the 90s people were snickering at me when I talked about Ruby at GE. But what I wanted to do was get everyone into a mindset where they weren't dealing with these monolithic processes, monolithic memory spaces, monolithic sets of languages and libraries that all bleed together. With all of these boundaries, with small things that have to communicate through clean interfaces, well, it's a lot harder to produce tight coupling between your Haskell code and your Ruby code than it is to not produce tight coupling. You have to go out of your way. You have to be Aaron Patterson with his PHP interpreter in Ruby, for example. So we didn't use all these languages, or platforms, or whatever they are, but by the time we were acquired and stopped working on our backend platform, we had something like 13 backend languages, including Haskell, as I said, Rust, Go, Ruby, Clojure, Scala, Node, et cetera. And I think that actually contributed very heavily to creating the system that we wanted to create. So this is Joe Armstrong. If you were at RubyConf 2006, you probably learned about this video. But assume that things are going to fail. Failure, in a system like this where you want things destroyed, is a virtue at runtime, but also in your code, which leads you to the concept of mean-time-between-failures (MTBF) optimization versus mean-time-to-resolution (MTTR) optimization. A lot of people spend a lot of time thinking about how can I never have a failure, which is probably impossible in most cases. Rather, if we spent all of our time thinking about how to recover from failures, and forcing them, we would be in a better state. One classic example of MTBF optimization, probably MTTR too, is testing. So this slide probably speaks for itself. Tests are a design smell. I'm surprised I didn't get booed. I guess we've gotten past that point now.
We're too old to worry about it anymore, but there was a time when, at a Ruby conference, people would have been upset about this. Tests are coupling. Tests reach into your code in a way that actually stops it from moving forward. I had at least one conversation yesterday, and every time I go to a conference I talk to people about their project, where they're trying to upgrade a Rails version, for example, which was the most recent one I heard about, and they spend so much time screwing around with the test suite that it is a detriment to them. It doesn't help them. So we've forgotten why we test things. I got to pair with Kent Beck once, and we sat down and I wrote something, I don't know what it was, it was in Java, and I said, should I write a test for this? And he said, you should write a test for anything that could possibly break. And I said, well, what about this? And he said, could it possibly break? It's like, I don't know, you know? I'll write a test for it, because I thought he was trying to trick me. But I think that's a good way to think. And of course he didn't want to tell me, because the idea is not to tell me when to write a test, it's to give me intuition about when. Well, what if your code triggered the intuition that the answer is always no, because your code is so simple it just can't break? That's what I mean by this. The other thing, though, is: don't let your tests be an anchor that you have to drag around. And maybe it's more important to monitor the runtime behavior of your code than it is to test it. Because you do all this shit to make your tests work and simulate reality, but they aren't reality. They can't catch the errors. How about catching the errors in production, and getting to where you can fix them really, really quickly? I'm sorry, bad words, pardon my French.
Experience the worst-case scenario as soon as possible, so you don't have to fear it. When I joined Wunderlist, I think it was the second day I was there, I got access to the backend systems and I started just deleting servers from the clusters until it crashed. At that point they had moved me from the United States to Berlin to be their CTO, and I'm sure they were thinking, well, it's too late to send him back now. But I did that slowly, and you would see it getting worse and worse. You would watch the graphs, and then it would crash. And then the team, which had previously been paralyzed by fear, had to jump into action with me to save me from my own idiocy and make it come back. And the users were really complaining. We had problems, people weren't able to log in. So I probably took it a little too far. But we were in a situation, before I got there, where we had finally gotten the system to where it would sort of work, and no one wanted to change anything because they were afraid of it. And there you have stasis, and you're screwed when you have stasis. You will not create a system that survives. So I got us into a situation where we got used to dealing with it. And I think that's a pattern for the future. Back to this homeostasis idea, I mean the idea of regulation. Really, my grand experiment is paused right now, and it hasn't gotten to this point, but I wrote some papers about it because I thought that's what a CTO is supposed to do. And I was imagining, here is a generic diagram of a system. Think about the different ways, if you work on backend systems that have scalability problems, for example, where you have components that crash because they're under too much load, the database can't handle the load, et cetera. If you've got measurements on everything, you could say, okay, the database can't handle it. The system is going to regulate itself, and it's going to slow down the incoming traffic.
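That feedback loop can be sketched in a few lines of Ruby. The threshold, the backoff factors, and the load reading are all invented for illustration; a real system would be fed actual measurements.

```ruby
# A toy homeostatic regulator: when measured database load crosses a
# threshold, the system throttles its own intake; when load drops back,
# it gradually recovers. All the numbers here are made up.
class Regulator
  MAX_LOAD = 0.8

  def initialize
    @intake_rate = 1.0 # fraction of incoming traffic we accept
  end

  attr_reader :intake_rate

  def observe(db_load)
    if db_load > MAX_LOAD
      @intake_rate = (@intake_rate * 0.5).round(2)            # back off hard
    else
      @intake_rate = [@intake_rate * 1.1, 1.0].min.round(2)   # creep back up
    end
  end
end

reg = Regulator.new
reg.observe(0.95)  # overloaded: intake drops to half
reg.observe(0.40)  # healthy again: intake starts recovering
```

The interesting property is that nobody pages a human: the system reads its own vital signs and adjusts, the way a body does.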
It turns out that the way you deal with a distributed system like this, when you're trying to work on scaling issues, is by doing these things manually. So this is the first sort of obvious step toward simulating homeostatic regulation. You can imagine a lot of different things, though, where the system watches itself and adjusts its own components, kills things, slows things down, or artificially speeds things up in order to take care of itself. Even creates problems that it can then react to. So, some older examples: there's the Chaos Monkey, which is famous from Netflix. Back in the day when they started using EC2, the AWS thing was controversial, because people would say the instances die randomly, et cetera. So they made the decision to use it anyway, and they said, well, if the instances are going to die randomly, we need to make sure they're dying a lot, so that our system can deal with it. So they created this thing that would just cause problems, and make sure that the system could run smoothly all the time. Pinterest did something cool a few years ago. AWS has a concept called Spot Instances on EC2. Normally, if you want to start a new server, there's a fixed price for a certain type of server. You boot it up and you pay that price, and then you shut it down and you stop paying. Spot Instances give you a chance to bid on a secondary market for computing power, for the same types of servers. You set a minimum and maximum price, and when you get it, it starts up the servers, and when the price changes and you can no longer afford it, it just kills them. Which is weird if you think about running production code on it, because it's gonna die.
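Whether it's a chaos monkey doing the killing on purpose or the spot market doing it by price, the mechanism is the same: something keeps removing random instances, and the system has to prove it can cope. A minimal sketch, with a made-up fleet and an in-memory stand-in for the real terminate call:

```ruby
# A bare-bones chaos monkey in the Netflix spirit: pick a random running
# instance and kill it, so whatever supervises the fleet is forced to
# recover. The fleet and the "terminate" are illustrative stand-ins.
def unleash_chaos(instances, rng: Random.new)
  victim = instances.sample(random: rng)
  instances.delete(victim)  # stand-in for a real cloud terminate API call
  victim
end

fleet = %w[web-1 web-2 worker-1]
killed = unleash_chaos(fleet)
# fleet now has two members; the supervisor's job is to replace the victim
```

Run on a schedule, this turns "an instance died" from a 3 a.m. emergency into background noise the system already knows how to absorb.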
Pinterest switched entirely to Spot Instances, and then they built their systems so they could deal with all these fluctuations, which I thought was a great way to both force the concepts that I'm talking about and save a lot of money, because it's much, much, much cheaper. So we started doing experiments with that too, and we were saving hundreds of thousands of dollars a year on paper doing this. A lot of what I'm talking about with this cellular idea is encapsulation. For this to work, the services have to encapsulate their own data. They have to own the data. And I think encapsulation is decoupling. It's sort of an obvious thing to say. It leads to stuff like lots of small databases instead of one big one. Maybe some of this stuff is obvious now. A lot of people then say, well, don't you have referential integrity? What do you do about that? Nope, we don't have that. I would rather have this. I would rather have decoupling and the ability to evolve things separately. The whole point of this is that I don't want any of us to be doing these rewrites anymore, because it's just sad, like I said. I mean, if you want to just be a capitalist, it's a waste of money. But I think it's much worse than that. And here we are, hundreds of people in this room who are probably working on things that will not survive more than a few more years, if we're lucky. And we spend so much time on it. It's just not worth it. Life is too short for that. A few years ago, I went to Nordic Ruby in Stockholm, and Reginald Braithwaite gave a talk. And I don't remember anything except for this line. That's how bad the old aging memory is. Ruby has beautiful coupling. That was sort of the main point, I think, of his talk. And it's true. Part of what I loved about Ruby when I got into it was that I could do crazy stuff. No one would stop me. Dave Thomas used to say that Java is the blunt scissors of programming, because it tries to protect the programmer from everything.
And Ruby doesn't do that. In Ruby, you can redefine everything in the system at runtime, and cause all sorts of crazy crashes, and make it impossible for another programmer to come behind you and work. You can put something in a path somewhere that gets loaded, that they can't find, that redefines how strings work. And they'll spend a long time trying to figure it out. And it's true, that's part of what we love about Ruby: flexibility. But I've been thinking about Ruby's place in the world now, and the responses to my question, what does the Ruby community need to hear in 2017? Some of it was stuff like, well, why would I choose Ruby over Python at this point? Really kind of snotty answers like that. And a lot of them were about Python. Python won the data science thing, and therefore is gathering steam around that. And other people said Node won. And in both of these cases I say, well, what did it win? And that's sort of a relevant point too. It didn't win anything. We're still here. We still love Ruby, and we still use it. But what if Ruby was the language where, when people talked about it, they said: if you build things in Ruby, there are tools and practices in that community. And I would say tools first. Tools and parts of the language. I don't know what they are, but they're probably not the ones that have been requested since 1999. What if Ruby was the community where, when someone builds a system in Ruby, it's probably never gonna have to be rewritten. Never going to be thrown away. At least not en masse, because the system will survive. So I hope that that's an inspirational idea.