My name is Michel Martens. I'm from Argentina. I've been using Ruby since 2003. At that time, there were no big Ruby libraries like Rails, or RSpec, or Bundler. As far as I can remember, it was all about small libraries that highlighted how expressive Ruby is. And I think that made a big impact on me, because I always try to create small tools that solve very specific problems with little code. My username on Twitter and GitHub is soveran. Most of the code I write is open source, and it's in my GitHub account. I have a company called Openredis, which provides managed Redis, an in-memory database. This presentation is about human error and how it relates to the concepts of mental models and simplicity. I guess we are all familiar with human error. We make errors all the time. When we are programming, we can hit the wrong key, or forget a comma, or write the wrong algorithm. And it's a very forgiving environment, because we have plenty of time, there are no big consequences, and we have the help of our text editor and our interpreter or compiler, so in the worst case we can just run the program and see if it works. But something completely different happens when we're dealing with an issue in production, our website is down, and we have an emergency. A few years ago, I realized that I had never trained for that kind of situation, dealing with an emergency. We are usually racing against time, and every error counts. Hello. Hi. Doesn't work? It was upside down. That's a technical error. But we will learn that behind every human error there is a design error, so my jacket was badly designed. Much better. As I was saying, we make errors in development and it's no big deal, because we are not under pressure.
But when we are dealing with an error in production, maybe our customers are complaining, and every error we make can be critical. What happened to me was that at some point I realized I had never read anything about dealing with those kinds of emergencies. So I started investigating. I read a lot about accidents in aviation and power plants. It's fascinating, it's addictive. And I discovered a lot of things about dealing with crises, especially one thing called crew resource management. I think there will be a presentation tomorrow at the same time about that topic. In everything I read, there was always a reference to this book called Human Error, by James Reason. He did a lot of research in that area, including historical research, and he proposed a model to classify human errors based on our behavior, along with ways of preventing whole classes of errors. It's very academic; he cites a lot of references. It's like a paper, but with 300 pages. But if you are into the topic, it's very interesting. Another book that was very interesting for me was The Design of Everyday Things. The author, Don Norman, is a psychologist, but he's also an engineer, so a lot of what he says applies directly to what we do, to the things that we build. The main idea of that book is that behind every human error there's a design error. Because we are experts at making mistakes; it's our thing. And a good design should account for that fact: it should prevent us from making silly mistakes, and it should help us detect errors and recover from them. So I really recommend that book. I can't cover everything that is in those books, but I really recommend that you read them. I want to focus on something that is very related to this track, and it has to do with building accurate mental models of the systems we're building or using. A mental model is the representation we have of a system.
It has to do with how a system works, how it's internally designed. So one first thing to clarify is that knowing how to use something is not the same as knowing how it works. In order to know how it works, if we are talking about software, we need to read the code and understand what it does. If we do that and we create an accurate mental model, we will be less likely to use it in the wrong way, and when something goes wrong, we will instantly know where to look for the problem. So it's very valuable. If we only know how to use a tool, even if we feel we are experts, when something goes wrong we have no idea what went wrong; we don't know where to look for the problem. In order to fix it, we have two options. One is trial and error, which is terrible. The other is to learn how it works, investigate, and create an accurate mental model, and only then will we be able to fix the error. And an emergency is the worst possible time for trying to understand how a system works. So in any case, it's something we should plan in advance: learn how our tools are designed. For understanding how something works, our main barrier is its complexity, the complexity inherent to a system. I have an example of a historical process where complexity was reduced, not related to programming. It's about chess. This is how people used to write chess moves 400 years ago: "The king commands his own knight into the third house before his own bishop." It's not the handiest way of describing a chess move. And they knew that. At that time, they didn't write down full games, just the first three or four moves. Sadly, we don't have full game records from that period. But over the years, they realized that there was something valuable about keeping track of their games. So 100 years later, they had compressed the same information into a new notation.
We have some records from that period. 100 years later, it got compressed even more, and there are many more games from this period. Less than 100 years ago, we had this notation, the most compressed version of the original one. And then there was a change of paradigm: they no longer tried to compress it further, and they switched to a coordinate system. So right now we use this notation, Nf3, where N is the name of the piece and f3 is a coordinate on the chess board. Thanks to this notation, we have lots of databases of games. This is the language of chess, so people can discuss moves using this language, and as a result the general level of chess improved. Another example: when I started working with Ruby, we used to write tests like this. Here it's clear that I'm asserting that 2 plus 2 equals 4. Some years later, RSpec came out, and we could write the same thing with this other language on top of Ruby. Some years later, it changed a bit, so you had to write even more to get the same result. And then we got a change of paradigm: we no longer wanted to write the tests with code, and some people started using this version, where you have to read everything to understand what's going on. In a way, we're making the wrong trade-offs. In order to write this, you need more time and more effort. In order to understand it, you need more effort. You need more computing power to run these tests, because there's more code behind them. So we are sacrificing performance and clarity, and in exchange we are getting nothing. So that was an intuitive approach to complexity. Now we can talk about some properties of complexity, to understand its boundaries. What's the minimal amount of complexity of something? For example, let's say we have a constant function that returns the number 42. We can define the body of this function as simply 42, and the function would be correct. But we can make it more complex.
We can define it as 21 times 2. We can make it even more complex and define it as 1 plus 1 plus 1, and so on, 42 times. In fact, we can make it as complex as we want; we can add an infinite amount of complexity, and as long as we return the number 42, our function will be correct. So we can say that the maximum amount of complexity is infinite. If we go in the opposite direction, what's the minimal amount of information we need? In our example, it was just the number 42. But formally, it is the Kolmogorov complexity of the object we want to describe. This is from algorithmic information theory. The idea of Kolmogorov complexity is to determine the smallest amount of information needed to describe an object. Here's an example. Let's say we have this string, a sequence of a's and b's. Each object is a description of itself, so we know that, at most, the description will have the size of the object. If we measure the size of the string, we get 14 characters. But we detect a pattern here, and we know that we can express this with less information, so we can try this approach. It's totally dependent on the language we are using to describe the object; this one is Ruby. If we measure this new version, we get eight characters. So we were able to compress the object. The Kolmogorov complexity, formally, is the minimal description of an object. So we know that we can increase complexity infinitely, and we have a lower limit on complexity. Those are good parameters to keep in mind. Here we will be talking about software complexity, which has to do with the relationship between a program and the programmer: specifically, how hard it is for a programmer to understand a program. We are not talking about computational complexity, Big O or P versus NP; we are talking about something more psychological, which is how hard it is for us to understand something.
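The slide code isn't in the transcript, so here is a hedged reconstruction of the two examples above, the arbitrarily complex constant function and the compressible string:

```ruby
# A constant function that returns 42. The body can be made arbitrarily
# complex while the function remains correct.
def answer
  42
end

def answer_complex
  21 * 2  # same result, more complexity; we could keep inflating this forever
end

# Kolmogorov complexity: the string below is 14 characters long, but the
# Ruby expression  "ab" * 7  describes it in only 8 characters.
string = "ababababababab"
description = '"ab" * 7'

string.length        # => 14
description.length   # => 8
("ab" * 7) == string # => true
```

The exact character counts depend on the describing language, which is the point made above: the minimal description is relative to the language used.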
I will mention three metrics. There's no perfect metric for this yet. But one idea, described by Wolverton in 1974, was simply to count the lines of code. I mention this one because it's one of the most controversial topics in the history of computer science. It's obviously too simplistic; it doesn't capture the essence of what we think about when we think about complexity. But it was very popular at some point for estimating the effort and cost of software. The next one is cyclomatic complexity; there are tools in every language for measuring it. It was proposed by McCabe in 1976. The idea is to count all the possible execution paths in a piece of software. If we have an if, that means we have two possible execution paths. If we have two ifs, that's four execution paths. We count that number, and that's the cyclomatic complexity score of a program. The third one is code volume. This metric is also very popular. It has to do with counting the total number of operators in a program, the total number of operands, the number of unique operators and unique operands, and with a formula calculating a number, which is the code volume. And these three metrics, even if they seem very different, correlate very well with what we can experimentally measure as the difficulty of understanding software. There's a paper from 2007 showing an almost perfect positive linear correlation between lines of code and complexity as measured by these metrics, cyclomatic complexity and code volume. So even if we don't want to measure complexity, if we don't want to install any tools, just by counting lines of code we get an estimation that is almost perfect. If we reduce the lines of code of a program, we can be confident that we are reducing its complexity. Now, with these metrics we are not talking about the clarity of the software.
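As a small illustration of the path-counting idea (this example is mine, not from the talk's slides): a method with two independent decisions has 2 × 2 = 4 possible execution paths, and McCabe's metric scores it as 3, one base path plus one per decision point. Halstead's code volume, the third metric mentioned, is computed as V = N × log2(n), where N is the total count of operators and operands and n is the count of unique ones.

```ruby
# Two independent decisions: 2 * 2 = 4 execution paths.
def describe(n)
  parity = n.even? ? "even" : "odd"                       # decision 1
  sign = n.negative? ? "negative" : "positive or zero"    # decision 2
  "#{parity}, #{sign}"
end

describe(4)   # => "even, positive or zero"
describe(-3)  # => "odd, negative"
```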
We're assuming that we are talking about clear code, readable code. In fact, you can write very complex code that is readable and clear, or you can write code with low complexity that is very obscure. So we assume clarity is constant and good. Now, the Ruby paradox. The paradox is that we have a language that is extremely expressive. Ruby makes a lot of sacrifices in order to be expressive, and we usually need a fraction of the code we would need in other languages to describe the solution to a problem. The paradox is that the tools we produce are extremely complex. We have Rails, Bundler, RSpec: the most popular tools are the most complex ones. But that doesn't mean that it is impossible to write small tools. It's not that Ruby has a defect such that we can only produce very complex tools. In fact, within the same community, people are solving the same problems with tools of very different complexities. Here is one example. For testing, we can use a library like RSpec, or Minitest, or Cutest. And RSpec has 100 times the lines of code of Cutest. And they solve the same problem. Another example. I was very generous with ActionPack here; I didn't count all its dependencies. But you can see that for routing requests, you can use something like ActionPack, which has 20,000 lines of code, or Sinatra, which has 2,000 lines of code, or Cuba, which has 200 lines of code. And people are actually using these tools within the same community to solve the same problems. And it's not that a tool grows in complexity from one day to the next. The first version of ActionPack had less than 10% of the code it has today; it was a bit smaller than Sinatra is now. But some tools in our community encourage contributions and reward people who add code to a library. So the complexity arrives in tiny increments, and if you leave the door open, that's what you get. There are many more examples like this.
I wanted to show you one stack that you could use to build web applications today. Cuba is a router, 200 lines of code. Shield is a library for authentication; it has less than 100 lines of code, and you could use it instead of Devise, which has 60,000 lines of code. Malone is a library for sending emails; you could use it instead of ActionMailer, for example. Mote is a template engine you could use instead of ERB or Haml or whatever. Ohm is an ORM for Redis, so if you're willing to use Redis as your main database, you could use Ohm instead of ActiveRecord, for example. All of this together is under 1,000 lines of code, and you get your whole stack. My latest project is called Syro. It's a routing library similar to Cuba, but it gives you less freedom, because I conceived it as a pedagogical tool. I thought that would be a good approach for somebody who is just learning the language, but it turned out to be very powerful. I think it's the most memory-efficient and the fastest routing library in Ruby, and I built my latest project using Syro. So it's powerful, and it encourages a modular approach to building applications. I also wrote a tutorial and a demo application. The demo application has user sign-in, sign-up, and account activation; it sends emails and renders templates, everything you would find in a normal application. And the application has 200 lines of code. So it's like a starter kit for using simple libraries. If you take that approach, then you can replace any part of your stack; you get the power of changing parts of your stack. And of course, if any of you decide to use it, I welcome any feedback. The idea is for somebody who barely understands Ruby to be able to build an application with this. Something common to all these tools is that they don't change much. This is from the contributing guidelines of some of the libraries I created.
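To give a feel for why routing can fit in so little code, here is a toy dispatcher. This is not Cuba's or Syro's actual API, just a sketch of the underlying idea: a route table mapping a request method and path to a handler block.

```ruby
# Toy router: a hash from [method, path] to a handler block.
class TinyRouter
  def initialize
    @routes = {}
  end

  # Register a handler for a method/path pair.
  def on(method, path, &block)
    @routes[[method, path]] = block
  end

  # Dispatch: run the matching handler, or return a 404 placeholder.
  def call(method, path)
    handler = @routes[[method, path]]
    handler ? handler.call : "404 Not Found"
  end
end

app = TinyRouter.new
app.on("GET", "/") { "hello" }

app.call("GET", "/")        # => "hello"
app.call("GET", "/missing") # => "404 Not Found"
```

Real libraries add pattern matching, nesting, and request/response objects, but the core dispatch idea stays about this small.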
The idea is that if a problem doesn't change, and if you don't find a better solution, there's no reason to change the software. We are very used to selecting the tools we use based on how recently they were updated. But Shield, for example, the one I mentioned, was last updated more than a year ago, and it works fine. Around these projects, there are no core teams of active developers, because there's no active development at all. In fact, I think a tool that changes all the time forces you to learn all the time. You can't create an accurate mental model of something that's constantly changing. So that should inspire fear: if you see something that changes all the time, run away. In closing, I wanted to share this with you. This is an article by Leslie Lamport. He made many contributions to distributed computing, distributed systems. He wrote this article where he compared a program with a car. He said that, for example, a car needs maintenance because its parts wear out through use. But a program is a mathematical object; it doesn't need any maintenance. If you have an if/else, you can use it a million times and it will remain the same. A program has a meaning, and you can prove whether the meaning is correct or not. But you cannot prove that a car is correct. You have to start it and see if it runs, and if it runs today, that doesn't mean it will run forever. If you noticed, I mentioned two words that we use all the time to refer to software: maintenance and run. We have continuous integration to see if our program runs, and we have maintainers. We treat a mathematical object as if it were a physical object. He later wrote another paper about the future of computing. He said that we have created tools that are so complex that we have kind of given up trying to understand them. If we think about something like Rails, for example, with over 250,000 lines of code, there's no way to read all that code and create an accurate mental model.
Especially if we consider that it changes all the time. When we are dealing with systems that we don't fully understand, systems that are more on the biology side, we tend to behave irrationally, and we can make decisions that we wouldn't make with systems that we fully understand. Some people who don't understand how their body works may use homeopathy, for example; but they wouldn't use homeopathy on their cars. With software, something similar happens. We can select a tool or reject a tool because it doesn't have enough stars on GitHub, and that's irrational. The rational thing would be to read the code, understand what it does, and see if it works for your use case. Or we check the recent activity, like I mentioned. I wanted to bring up this paper because it talks about the future of computing, and we are all programmers, so it's in our hands. The idea is to build a culture of understanding and reading code, and to strive for simplicity, which is the best way we have to cope with errors. If you like these ideas, we have a website with pointers to an IRC channel where we discuss them, and a subreddit where we share the papers I mentioned. I invite you to join us. And that's it. Thank you. OK, the question is: if I want to use something from ActiveSupport, for example, some handy tool, do I use the whole library, or do I extract whatever I need? That never happened to me, so I don't have a clear answer for that specific question. But what I usually do is try to understand the problem, distill the essence of the problem, and find a solution that works. Usually, when you find the right data structure and the right algorithm and you solve the problem tightly, that solution will last for a very long time. So in a way, I would say: if you detect a solution within a huge library, extract it, or better, find the essence of the problem and solve it.
OK, the question is: Ruby has traditionally been very friendly to new users because it has frameworks you can use, so how do we get from there to having people use small tools and simple systems? From what I could see, which is not a universal answer: I met many people who participated in Rails Girls Summer of Code, or Rails Girls events, things like that, and they didn't understand a thing. They created an application with Rails, but, in an extreme case, one person ran away from programming for three years because she thought she wasn't good at it, just because she finished the application but didn't understand a thing. And I don't know how many of us learned to program with a big framework of that complexity; it wasn't my case. I also know a lot of people working with Rails who want to learn how to use Ruby. They are proficient, I guess, in their environment, but they realize that they don't know enough. I think one way to improve that is what I'm trying to do with Syro. The tutorial has 13 steps that are very gradual and easy. Then you have the demo application, where you get to see how these small tools interact. And you can read it in less than a day, in a couple of hours, actually. My idea is to bridge the gap with that kind of approach. Another idea is to get people to read every line of code they add to their program. If they know how everything works from day zero, that gives them a lot of confidence. I think those should be the first steps for somebody trying to get into the industry. The next question is: given that we're in an industry dominated by these big frameworks, how do we transition to this different approach? I think maybe there's no magic formula. I know people who say, yeah, I will use Rails Metal.
And they start building Cuba applications, or Syro applications, or plain Rack, or whatever, and they sneak these small tools into the Rails project. Some people manage to convince their clients that this is better. If you can show your client that your application has a tiny fraction of the code it would need with Rails, and that the performance is 10 times better, then from the economic standpoint it's also a good argument. So yeah, there's no universal way, but whatever works. The next question is: what do I think about metaprogramming? I think it's something very interesting when you start using Ruby; at least, I was amazed at how flexible the language was. But over time you realize that metaprogramming has a performance penalty and a cognitive load penalty. So now I try to make things easier for me to understand and easier for the computer to run. I would say metaprogramming is something to avoid. Any other questions? Do we have time?