Hi, I'm Ted. I work as a Ruby developer at Thinkables, and when I was a kid I used to spend a disproportionate amount of time thinking, at least according to the social norms of the time. I still remember my grandmother's worried look when I would spend a whole day in bed staring at the ceiling, but I would gently dismiss her, explaining that I was just thinking. And I would think about all sorts of things: space, the universe, technology. That would freak some people out, because I would ask questions you don't normally expect from a six-year-old, like: how much sunlight does an average person consume in a day? However, some of the questions I asked are questions that are common to most of us, like: what happens after a person dies? Perhaps sparked by the death of a pet or a relative, it's a question that marks one of our first real existential crises, and it comes with a flurry of follow-up questions. Of course we can't really know what happens, but we can imagine the different possibilities, and the possibility that historically, and still to this day, is by far the most unpopular is that maybe we simply cease to exist: non-existence. It's unpopular because it's very hard to think about; thinking depends on our consciousness, and our consciousness is somehow synonymous with, or at least tightly coupled to, our existence. This is an infinitely interesting question from a philosophical perspective, and it marks an important and formative moment as we grow up. It's important that we can talk about it in our natural languages, but is it equally important that we can talk about it in programming languages? In Ruby, for example, we indicate the absence of an object with nil, and we've simply come to accept this as a matter of fact, sort of as an inescapable consequence or an artifact of object orientation. Yet there is something odd about it, almost eerie. It seems out of place, almost like it was an afterthought.
And as a matter of fact, that is exactly what it is. Tony Hoare, the inventor of the null reference, never actually intended it to be part of the language he was creating, ALGOL W. He later said that he added it simply because it was so easy to implement, and he came to call it his billion-dollar mistake. Null has since been perpetuated for a couple of reasons. Initially because of familiarity: to win developers over to your new language, it needs to be reasonably similar to the languages they're already using, to ease the transition. And because null is one of the easiest things to implement, it was an easy win. Because of that, null made its way into C; and because it was in C, it made its way into C++; and because it's in C++, it made its way into Java and C#. And once null starts appearing everywhere, there's another reason people keep adding it: interoperability. Of course null made it into databases, and it can be useful to have types in your language that roughly match the types you have in your database. So is there something fundamentally problematic about modeling something that doesn't exist in a programming language? If you're anything like me, you might have been programming for a very long time without ever really asking this question; then again, I'm a person who thought a lot about non-existence outside of programming. So let's have a look at some of the pitfalls of null references, in programming in general and in Ruby in particular. Starting with the elephant in the room, something I think we can all relate to: the NoMethodError caused by a nil reference. Still to this day, I'd say about 90% of the errors I encounter are of this type. And in Ruby, of course, we consider nil an object like any other object, so we're kind of hiding the null reference error of other languages under this umbrella of NoMethodError. But shouldn't we really be calling it a "no receiver error"?
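As a minimal sketch of what I mean (the method name here is my own illustration, not from a slide), the error message blames a missing method when the real problem is a missing receiver:

```ruby
# Calling a String method on nil: the error complains about a missing
# method, but the real problem is that there is no receiver at all.
def shout(name)
  name.upcase
end

shout("hi") # works fine, returns "HI"

begin
  shout(nil)
rescue NoMethodError => e
  puts e.message # the message mentions `upcase', not the absent receiver
end
```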
I mean, isn't that what we're actually trying to say? Saying it's a "no method error" is like me saying: I called no one today, and they didn't pick up. It simply does not make much sense. Even worse than a runtime error is a silent error. Because nil is an object, it inherits from BasicObject, and because of that it implements a number of methods, including the comparison operator. So we might accidentally be comparing things to nil in our code base; we might even be relying on this behavior for our code to work. I'll get back to this later in the talk. Next, nil adds quite a bit of mental overhead. Ruby is a duck-typed language, and working with a duck-typed language requires a deeper understanding of the code you're working with: what inputs are expected, and what possible outputs can we expect? Keeping track of which methods can return nil is a considerable mental overhead, but failing to do so leads to the NoMethodErrors we saw before. We of course use the bang suffix to mark a dangerous method, a method that throws an error or mutates its receiver. But thinking about it: if 90% of our errors are due to nil references, shouldn't we really be marking the methods that can return nil? So far I have not seen anyone write code like this, but as long as we can, maybe we should. Or, to put it frankly, maybe we should write it like this: we can either receive a duck, or we can receive something different entirely. And this brings me to the last point, which is the most important point to me: nil doesn't really seem to model anything useful at all in our applications. It doesn't even seem to model what it's supposed to model, which is the absence of an object. We can define a simple method that takes a single mandatory argument, mandatory meaning that the object must not be absent. Still, we can pass nil to it, which is supposed to mark the absence of an object. And to me, this does not make sense.
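Both problems can be shown in a few lines (a quick sketch with made-up names): a mandatory argument happily accepts nil, and a comparison against nil fails silently instead of loudly:

```ruby
# A "mandatory" positional argument still accepts nil, the very marker
# for an absent object, so the arity check gives us no protection.
def describe(thing)
  thing.to_s
end

describe(nil) # no ArgumentError is raised

# A silent error: nil implements ==, so a comparison that should never
# involve nil just quietly returns false.
current_status = nil
puts "done!" if current_status == "finished" # prints nothing, no error
```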
And I would be ready to say that it might even be a bug in Ruby. Even worse, it's inconsistent. In some contexts, the absence of an object is simply represented by the absence of an object, like inside an array: we don't write an empty array as an array that has nil in it. And this inconsistency just makes the problems worse. Don't worry, though: Ruby is by far not the worst offender when it comes to null references. The prize for the most derpy null scheme actually goes to Objective-C, which has four different types of null that we need to keep track of. But there are also languages that don't have null references at all; they just forgo it. And now you might think, yeah, these are probably the fancy functional languages that everyone is talking about, and you would be partly right. Haskell, a purely functional programming language, doesn't have null. But some of the others, like Rust, are multi-paradigm languages like Ruby. And it seems that the people at Apple have recognized the pain that all these nulls have caused in Objective-C, because in Swift we don't see null at all. So by now, I may or may not have been able to convince you that representing the absence of an object with some other object is really not that good. Or you might just be thinking that I'm up here complaining. The difference, of course, between complaining and acknowledging a problem is to also suggest a solution. There are two approaches we can take to the problem that nil is causing us. The first one is to try our best to hide the problem; the second one is to adapt our designs to work around it. So let's have a look at the first option. I think Ruby sort of recognizes the problems of nil, because it implements a lot of conveniences for dealing with it, the most obvious being the nil? predicate. But this is basically just a type check, and the point of using a dynamically typed language is surely not to defer manual type checking until runtime.
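The inconsistency, and the nil? predicate as a hand-written type check, can be sketched in a couple of lines:

```ruby
# Inside a collection, absence is modeled by... absence.
empty = []
not_empty = [nil] # this array is NOT empty; it holds one object: nil

puts empty.size     # 0
puts not_empty.size # 1

# Outside a collection, absence is an object, and guarding against it
# amounts to a manual type check at runtime.
value = nil
puts "got a value" unless value.nil?
```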
That sort of gives you the worst of both worlds, and type checking is simply something we don't want to be bothered with. But other types also implement convenience methods to deal with nil. We saw the inconsistency between the absence of an object outside and inside of an array; to deal with this inconsistency, Array implements the compact method, which simply rids your array of all nil values so that you get an actual empty array. We also recently got the dig method for arrays and hashes, but that sort of just defers the problem until deeper down in your code, which is already a bad thing, because it will be harder to find the actual source of the problem. Third-party libraries also recognize the problems that nil causes, and they implement their own methods to deal with it, the most popular being the blank? and present? predicates in Active Support. Then there's the lonely operator, new in Ruby 2.3. When I first saw it, I thought it was super convenient. But thinking about it a bit more, the only thing it does is hide the problem, and it hides the problem by making us feel better about our code. Taking this example, we can see there's quite a fair bit of duplication in there. We were taught that we should not repeat ourselves, right? We should be DRY. Enter the lonely operator: we can write the same thing, and think of how much better we feel about this code. It's short, it fits on one line, and it doesn't have any duplication in it. But did we actually fix anything about our code? Or did we just sweep it under the carpet by making it look really neat? I came up with a name for this code smell, by the way, when I was writing this code. I like to call it "nils all the way down", referring to the story about the world resting on the back of a turtle. Next, nil implements a bunch of type conversions that let you turn nil into other basic objects.
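The duplication example and its lonely-operator rewrite from a moment ago look roughly like this (the User and Address classes are my own stand-ins for whatever was on the slide):

```ruby
# A hypothetical chain of objects, each step of which might be nil.
User = Struct.new(:address)
Address = Struct.new(:street)

user = User.new(Address.new("Main St 1"))

# Repetitive nil guards...
street = user && user.address && user.address.street

# ...versus the lonely operator (&., Ruby >= 2.3). Shorter and DRY,
# but the nils are all still there, just better hidden.
street = user&.address&.street

puts street # => "Main St 1"
```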
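For reference, the conversions in question turn nil into the "empty" member of each basic type, a quick sketch:

```ruby
# nil converts into the empty member of each basic type.
p nil.to_s # => ""
p nil.to_a # => []
p nil.to_i # => 0

# Converting to zero is harmless with addition, since 0 is the
# additive identity...
p 100 + nil.to_i # => 100
# ...but 0 is NOT the identity of multiplication: the whole product
# silently collapses to zero.
p 100 * nil.to_i # => 0
```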
Similarly, when I learned about these, I thought they were super convenient. But the only reason they are convenient to me is that I have a lot of trouble coming from nil in the first place. And we need to ask ourselves: is it sensible that nil turns into zero? Well, it seems intuitive at first; zero is, of course, the numeric representation of nothing. But from a mathematical perspective, it doesn't make that much sense. Zero is the mathematical identity of addition, so it works there, but it certainly isn't the mathematical identity of multiplication. So we can't use zero as a null object for arithmetic, and the absence of a number cannot simply be said to be zero. If we take the example of the phone call from earlier: we can easily divide any number by nothing, because we simply don't divide it. But if we suddenly say that nothing equals zero, then we're in deep trouble. Nil is also considered false when used in a boolean context. So again, let's ask ourselves if this makes sense. If you text someone you're interested in and they don't reply, I think it's sensible to assume that they're simply not interested; or in other words, that the absence of a reply indicates a negative reply. But if we encounter someone on the street who looks to be in distress, and we ask them if they need help, they might not be able to reply, but it's still not safe to assume that they don't need help. Or take this scenario, for example. Assumptions are extremely dangerous, especially when we let computers make them for us. So instead, we should work to improve our designs to work around the inevitable fact of nil. This in turn falls into two categories of tasks. The first is that we need to be good citizens and not spread nil around our own code base. The second is dealing with nil wherever it might show up, for whatever reason it shows up. So first, I came up with something I dubbed the rules of non-existence.
And these are very simple rules, meant to isolate nil in your application as much as possible, and they can dramatically reduce the number of errors we get in our code. The first one is that methods that return basic types should always return that type. If the user expects to get a string back, in the worst case we hand them an empty string; we do not hand them nil and expect them to deal with it. Of course, this can sometimes happen unintentionally, like if you forget the else clause of your if statement or your case statement. But there are tools that can help us with this: RuboCop, for example, will look through your code and point out where you forgot the else clause. The second rule of non-existence is that if we have a method that relies on another method that can return something or nil, then the first method should not itself return nil. The point of this rule is to give every nil in your application a start and an end: a nil should never travel farther than one method call. And this lets you prevent the "nils all the way down" problem that we saw earlier. Needless to say, we should all exercise our code using tests. I talked about expectations earlier, and it's really no coincidence that the syntax we use to write tests starts with "expect". We should all write tests to make sure that our expectations are met. There's another quick trick we can use. Ruby provides predicate methods for some types that usually only that type will respond to. For example, we can send the zero? predicate to a numeric type, and it will tell us whether a number is zero or not. This allows us to be much more explicit about our expectations, because we only ever want to compare one number to another number. And it prevents us from accidentally comparing things to nil, or even to other types, by raising a runtime error instead.
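A small sketch of that trick: comparing with == fails silently when the value is unexpectedly nil, while zero? raises right at the source of the problem:

```ruby
count = nil # imagine this came back from some lookup that found nothing

# Silent: nil == 0 is simply false, and the bug travels on unnoticed.
puts "empty" if count == 0 # prints nothing, raises nothing

# Loud: zero? is only defined on numeric types, so the stray nil is
# caught exactly where it appears.
begin
  puts "empty" if count.zero?
rescue NoMethodError => e
  puts "caught the nil at its source: #{e.class}"
end
```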
And since NoMethodErrors are what we get from nil references anyway, I think that's much better than a silent error, especially when dealing with numbers. Take this extremely contrived example as proof of that. Next, we can ask ourselves whether we have a Law of Demeter violation. I won't go into what that is, but it's really an extremely simple concept with an extremely fancy name, and you should go look it up if you're serious about programming. Next, we can use the null object pattern to inject meaningful domain objects where we would otherwise have nil in our application. A null object is a more specialized type of nil that makes sense in your application's context. If, for example, our view templates are cluttered with statements like "if current_user", then we could probably benefit from injecting a guest user object. We can even implement signed_in? predicates on both the user and the guest user, and it will still be much more semantic than merely checking for the presence of a current user. What does it even mean that a current user is not present? It's nonsensical, unless our application is being haunted by a ghost. We can use a plain class like that, or if we feel very brave, we can monkey-patch one of the existing classes to provide one or more null objects for itself. As we saw with the arithmetic example, there might not be a single unambiguous null object; it might depend on the context in which it's used. Another example of this could be a voting application, where a vote can be either blank or missing. We can also borrow from the fancy functional programming world and use something like the Maybe pattern. In Ruby, there's a library called Possibly, and using it looks something like this. This might seem a bit esoteric to a Rubyist, but it's essentially a functor. And if you have no idea what that is, it's basically a wrapper that wraps any object, or the absence of an object, and it will respond to map.
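The guest-user idea from a moment ago can be sketched like this (the class and method names are my own illustration, not the talk's slides):

```ruby
# The null object pattern: a domain-specific stand-in for "no user".
class User
  attr_reader :name

  def initialize(name)
    @name = name
  end

  def signed_in?
    true
  end
end

class GuestUser
  def name
    "Guest"
  end

  def signed_in?
    false
  end
end

# Instead of `if current_user` checks scattered across every template:
def current_user(session)
  session[:user] || GuestUser.new
end

user = current_user({})
puts user.name       # "Guest": no nil check needed anywhere downstream
puts user.signed_in? # false
```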
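I can't reproduce the Possibly slide from the transcript, but a minimal Maybe-style wrapper in the same spirit might look like this (this class is my own sketch, not the gem's actual API):

```ruby
# A tiny Maybe functor: wraps a value or its absence. map only runs
# the block when a value is present, and always returns another Maybe,
# so calls can be chained without ever touching a raw nil.
class Maybe
  def initialize(value)
    @value = value
  end

  def map
    @value.nil? ? self : Maybe.new(yield(@value))
  end

  def get_or_else(default)
    @value.nil? ? default : @value
  end
end

absent = Maybe.new(nil).map { |s| s.upcase }.get_or_else("unknown")
puts absent # "unknown": no NoMethodError anywhere along the chain

present = Maybe.new("main st").map { |s| s.upcase }.get_or_else("unknown")
puts present # "MAIN ST"
```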
Whenever you call map on it, it is guaranteed to return another wrapper, which in turn responds to map as well, and you can fall back to a default value that you choose yourself. So, before I started working on this talk, I decided I should ask what Matz has to say about all this. After all, he's much, much smarter than me, and although I might have spent more time thinking about non-existence than he has, he has certainly spent more time thinking about programming languages. So I reached out to him on Twitter. Unfortunately, I forgot an old Swedish saying that goes: the way you ask the question is the way you will be answered. He just replied with "yes". Well, I now know that it was not a mistake, but I can't know for what reason nil is in Ruby, and I think at this point I'm just too afraid to ask. I would like to think that it is there for some deep philosophical reason, so that Ruby can understand and comfort us when we're in distress. But until someone can convince me of that, I can't help but think that nil is just a brain fart that has been modeled into a programming language. And of course, it is said that we mock what we love, and I wouldn't stand up here and talk about the quirky weirdness of nil without actually really loving Ruby. Thanks.