So yes, if you want to know details about the benchmark big-o gem, I did give that talk at RubyConf. It's on the internet somewhere. But this talk is a little more vague in nature. I'm going to be talking about numbers: big numbers, small numbers, and a bunch of numbers in between. And more importantly, how our brains deal with those numbers.

So first, I'm going to talk a little bit about something called number sense. That is the idea that certain animals have an innate understanding of numbers built into their brains. It might not be surprising to learn that animals such as elephants, dolphins, and great apes have been shown to have number sense. But it's not restricted to mammals. Number sense has been demonstrated in birds, crows being one example, and even insects such as honeybees and ants have been shown to have an innate number sense. These animals have an innate ability to distinguish different quantities, such as one versus two, or three versus four, but it tends to break down at about five or six. There is still some ability to distinguish numbers with a larger difference, such as the difference between eight and twelve. And of course, humans have number sense as well. This is not surprising. But what you might be surprised to find out is that ours is not measurably better than that of most other animals. Human groups that have not developed finger counting have a hard time distinguishing quantities above four.

So this brings us to the very first lie that we tell about ourselves, which is that our brains inherently understand numbers. In fact, our number sense is very limited in scope: it can basically handle one, two, three, and four. Above that, we must learn how to count.

Counting is something humans have had for a very long time. Finger counting is the first step towards counting, but we have other ways of counting as well, such as stacks of pebbles, notches on sticks, or knots on a rope. These are ways for us to track quantities larger than four before we have words to describe those numbers. We can use pebbles to keep track of our herd of sheep without actually having a word for the number of sheep that we have.

Next, of course, it is very useful to have names for the numbers themselves, so that's the next step in the evolution of human counting. However, the very first naming systems humans developed were very limited. We started out by just having names for one, two, and then many. This still exists today in groups such as the San people of Namibia, various Aboriginal tribes in Australia, and the Pirahã tribe in the Amazon. From there, many human cultures began developing different sets of number words. The Tsimshian language, from a tribe in British Columbia, has different sets of names for the numbers depending on what kind of thing you are counting. And before we spend too much time thinking about how silly that sounds, let's notice that the English language bears the memory of a similar history in our own past.

So that brings us to our second lie, that counting things is easy. Abstract counting was, in fact, a very difficult thing for humans to discover, and counting takes a lot of mental energy. Now that we have learned how to count and have words to describe the various numbers, we can begin creating numeral systems to help us count larger and larger numbers. The first numeral system we're going to talk about is Roman numerals.
So this is basically an improved tallying system. The characters that represent the different numbers, I for 1, V for 5, X for 10, L for 50, C for 100, and M for 1,000, are positioned by value. We can see that all of the I's are to the right, and the characters grow larger in value as we move to the left. The Romans did add a little twist with subtractive notation, which means that a C placed before an M means subtract 100 from the 1,000. But this number system has its drawbacks. Can anyone say what number this is? It's going to be hard. You still have to do a lot of math in your head in order to figure out what number is being represented.

The next numeral system, the Arabic numeral system, was much more successful. In this numeral system, each value in the base, which in this case is base 10, gets its own character. On top of that, we developed positional notation, which means that a sequence of digits creates a number, and the position of each digit within that sequence indicates its order of magnitude. Once we have positional notation, the concept of exponentiation quickly follows: now that digits have a place, we can count that place. So a number such as 100,045 can be represented as 1 times 10 to the 5th, plus 4 times 10 to the 1st, plus 5 times 10 to the 0th. There's another way to represent this, which is E notation, and I'm going to keep using it for the rest of this talk because it's shorter and doesn't require me to make superscripts, which are really irritating in Keynote.

So now we have our exponents, and clearly these exponents themselves need some names. Here are just a couple of examples of the ways we have come up with to name them. These numeral systems allow us to count much bigger and much smaller numbers much more easily, and to share those numbers with others very quickly, with the fewest words. We don't have to cut one million notches into a stick and try to share that with a friend. We can just say one million. It's much easier that way.

But that brings us to another lie that we tell ourselves, which is that naming these numbers means we actually understand the value they represent. We can easily say one billion or one trillion, but internally, do we have a good understanding of what that number actually means and what it actually represents? For example, Uber just got a valuation of $50 billion. Does anyone have any real understanding of how many billions of dollars that is? It just seems insane. How different is that from $1 billion? We don't really know.

So now we have three truths. The first is that our number sense is limited to one, two, three, and four. The second is that counting is actually kind of hard and takes mental energy. And the third is that just because we've named numbers does not mean that we understand them. We can use this in our day-to-day lives as programmers. The first example I'm going to give is in the user interface and UX space: limit the way you structure what you display to your users to something that's much more easily handled by our innate number sense, dividing things into three sections, not five or seven. This allows people to better understand and grasp the information that you're displaying to them.
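One concrete way to apply this: don't show users a raw figure like 50000000000 and hope their number sense copes. Turn it into a short, named chunk they can actually grasp. Here is a rough Ruby sketch of the idea; the humanize_number helper and its thresholds are made up for illustration, not taken from any particular library.

```ruby
# Illustrative helper: turn a raw value into a short, human-scale phrase.
# (Assumes a non-negative number; thresholds are arbitrary.)
def humanize_number(n)
  case n
  when 0...1_000    then n.to_s
  when 1_000...1e6  then format('%.1f thousand', n / 1e3)
  when 1e6...1e9    then format('%.1f million',  n / 1e6)
  when 1e9...1e12   then format('%.1f billion',  n / 1e9)
  else                   format('%.1f trillion', n / 1e12)
  end
end

humanize_number(50_000_000_000)   # => "50.0 billion"
format('%.1e', 50_000_000_000)    # => "5.0e+10", the same E notation as above
```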
Similarly, instead of displaying tables of numbers that the user has to parse and interpret for themselves, a much better approach is to analyze those values yourself and create representational displays, be it charts or graphs, that more easily tell people what the numbers you're showing them actually mean.

This also helps explain some of the best practices that we talk about in programming. The urge to keep our methods short, to limit the number of lines of code within each method, makes sense given that we're keying into our innate number sense and understanding things at first glance, rather than having to bring in our counting brain and analyze things more deeply. The same holds when we go into the testing arena: why do we try to limit the number of assertions we place in a particular test? It's the same concept. However, you can also go too far in this direction and make each method or each class too small. There is a balance to strike here: if we're trying to track the business logic of a particular piece of code across a number of different classes, we don't want to have to dig through 15 different files or 15 different classes to trace the logic. Being able to limit it to very particular places, and to see it all on one screen, helps us understand the code that we're writing.

So in order to talk about what very big and very small numbers are, I'd first like to define a baseline for our shared human experience of numbers, and I'm going to do this for both distance and time. The baseline for distance, I'm going to claim, is about one meter. This is the scale of our own human bodies and the scale of what we can grasp in the world around us. If one meter is our baseline, what is the very smallest thing that we can experience with our naked eye? That is about e to the negative 4, the width of a human hair. So that is the smallest thing we can experience at a day-to-day scale. And what about the largest thing that we can see and touch and conceptualize? I'm going to claim that that's something on the scale of a mountain. We can see a mountain at a distance, but we can also climb up a mountain and experience its size and scale for ourselves. That is about e to the 4.

For time, I'm going to claim that our baseline is about one hour of experience. We divide our lives up into one-hour chunks, so that's a good baseline for time. What's the smallest amount of time that we can experience ourselves? It's about the blink of an eye, which is about e to the negative 4. And what's the longest thing that we experience? Well, that's going to be our own lifespan, which is about e to the 5.

So that brings us to another lie that we like to tell ourselves, which is that we have direct experience with very small and very large numbers. In fact, our experience is limited to the range from the width of a hair to the size of a mountain, or from the blink of an eye to a lifespan. In the grand scheme of things, this is on the scale of thousandths to thousands, and when we're talking about numbers, those are not very big numbers. The same holds across other kinds of comparisons as well. So how did we go from living in the thousandths-to-thousands range to larger scales?
Humans have known for a long time that curved lenses and surfaces will magnify objects. The Nimrud lens is one of the oldest lenses discovered. It dates to about 750 BC and was found in Assyria. Similar lenses have been found in Egypt, Greece, and ancient Babylon. These were very crude devices, and while they were able to magnify to a certain extent, they were not super useful to humanity. The problem was that the mathematics describing how light is bent and refracted had not yet been discovered. Optical theory is the study of such things, mainly how mirrors and lenses bend and focus light. The law of refraction is required to compute the shapes of lenses and mirrors that will focus light at a particular point on an axis. The study of optics really picked up steam around the 1500s, and this maps to approximately when microscopes were first invented, around 1590, with telescopes following shortly after, in 1608. Only at this point in time could our world expand beyond what we can experience ourselves on a day-to-day basis. These inventions and others have allowed humanity to drastically increase our knowledge about the world.

So going back to our baseline of one meter: now that we have science to help us, what can we see? We can see things that are smaller. We can see bacteria at e to the negative 6. We can create microprocessor memory cells: 14-nanometer parts began shipping in 2014, which is e to the negative 8. We can produce gate lengths of five nanometers, the gate length of a 16-nanometer process, which is e to the negative 9. That is insane, because atoms themselves are not much smaller, at e to the negative 10; 153 picometers is the radius of a silver atom. And we can even determine the width of an electron, which is at e to the negative 15. These are massively tiny numbers compared to the width of a human hair, which is e to the negative 4.

We can see things that are even bigger. We can go and investigate the moon at e to the 6, the sun at e to the 9. There are even stars bigger than the sun, such as Rigel, a blue supergiant, and Betelgeuse, a red supergiant, both in the constellation of Orion. We can even see things such as the Pillars of Creation. This is just a small subset of the picture of the Eagle Nebula. These pillars are named that because the gas and dust inside them are busy creating brand new stars. The leftmost pillar in this picture is four light years in length, which makes it about e to the 16. So it's about as big, compared to our one meter, as an electron is small.

So what about time? We can detect things that are even shorter than the blink of an eye. A synapse firing in our brain takes about one millisecond, which is about e to the negative 7. The processor cycle of an 8086 or 8088 processor around 1980, running at five megahertz, is e to the negative 10. And our current best processors cycle at 3.5 gigahertz, which is about e to the negative 13. We can also discover things that happen on much longer scales. What's the organism that lives the longest? It's the bristlecone pine tree; the oldest recorded specimen lived about 5,000 years. Humanity itself has been around for 200,000 years, which is e to the 9. And dinosaurs lived for over 100 million years, which is e to the 12.
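If you want to sanity-check any of these exponents yourself, a logarithm does the work in one line. Here's a quick Ruby sketch using the one-meter and one-hour baselines; the inputs are just the rounded figures quoted above.

```ruby
# Order of magnitude = log base 10, relative to the one-meter / one-hour baselines.
HOURS_PER_YEAR = 365.25 * 24

puts Math.log10(0.0001)                    # width of a hair, in meters   => -4.0
puts Math.log10(14e-9)                     # 14 nm memory cell, meters    => about -7.9
puts Math.log10(1.0 / 3.5e9 / 3600)        # one 3.5 GHz cycle, in hours  => about -13.1
puts Math.log10(80 * HOURS_PER_YEAR)       # an 80-year lifespan, hours   => about 5.8
puts Math.log10(200_000 * HOURS_PER_YEAR)  # human existence, in hours    => about 9.2
puts Math.log10(4 * 9.46e15)               # four light years, in meters  => about 16.6
```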
So that brings us to another lie, which is that we've been able to explore the world in great detail for a long time. It's not true. Only in the last couple hundred years of our 200,000 years of existence have we gone beyond the thousandths-to-thousands range.

Our brains also like to estimate things, to estimate the odds. What is important here is that our brains evolved to assess risk. We need to know whether or not we're about to die, so that we can prevent it from happening; dying is kind of a bad thing. So our ability to do the math of analyzing risk is really tuned for immediate danger. And that, of course, means that our brains have some problems telling the difference between immediate risk and long-term risk. We are much more scared of a shark attack or a snake bite, and yes, those are very dangerous things if they happen to us, but they happen very infrequently. Compare that to the longer-term risks of smoking cigarettes or driving around in cars: each instance carries a tiny individual risk, but they happen so often in our day-to-day lives that they build up into massive dangers.

So our brains are not good at calculating odds. The lie we tell ourselves is that we are very rational beings, that we can do this math and determine what's more likely to happen than something else. In fact, that's not very true, and especially when we're dealing with numbers that are both very small and very large, our brains have a hard time determining what is most likely to happen. So we might say things to ourselves such as: there is no way a user will do that particular thing, or there is no way two users are going to click that button at the exact same time, right? That's never going to happen, ever. Or: no way could these two lines of code be executing at the same time. There are hundreds of thousands of lines of code in this code base; what are the chances those two are going to be executing at the same time? We don't have to worry about that, right? We don't have to protect against that case. No. That's your brain trying to tell you that the risk of something happening is not very high. In fact, when you're dealing with computers that process at such high rates, things happen so often that we can't do that math in our heads anymore. The chances of these things occurring rise dramatically. So what we need to remind ourselves is that extremely large and small numbers are new experiences for our brains, and calculating the odds of something occurring is actually really hard.
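To put rough numbers on that intuition: suppose a "never going to happen" collision has a one-in-a-million chance on any single request. The one-in-a-million figure is just an assumption for illustration, but the arithmetic is the point; at real traffic levels, the chance of never seeing it disappears fast.

```ruby
# Chance of seeing a "rare" event at least once in n independent tries:
#   1 - (1 - p)**n
p = 1e-6  # assumed one-in-a-million chance per request

[1_000, 100_000, 1_000_000, 10_000_000].each do |n|
  chance = 1 - (1 - p)**n
  puts format('%10d requests: %6.2f%% chance it has happened at least once', n, chance * 100)
end
# => roughly 0.1%, 9.5%, 63.2%, and essentially 100%
```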
So how many times do people see something like this, right? This fellow, a German fellow, was in the Guinness Book of World Records for having the verifiably longest legal name, up until they actually took this record out of the Guinness Book of World Records. This guy has 27 names: the first 26 each start with a different letter of the alphabet, and then his last name takes up something like two-thirds of this page. He went by the name Hubert Wolfstern. Well, he did; he's dead now. But anyway, he did exist. There are other examples of people who similarly have a multitude of names. And then there are also people with a single name. Cher legally has her name be just Cher, no last name. And the royal families of both Japan and Indonesia traditionally have only a single name.

And when you're building a web app, you don't really want to be pissing off the Emperor of Japan, right? If he wants to sign up for your site, you should let him. Then there are also cases such as people with hyphenated names being told that their names are invalid, or people with non-standard characters in their names; I think that's a lot more common here in Europe, and we Americans try to pretend it doesn't exist. And I'm looking forward to the time when emoji and other Unicode characters become common in our names. I mean, I want my kid's middle name to just be an emoji; that's gonna be great. Is your app gonna be able to handle that input? Probably not.

And then there's the case of email addresses, too, a similar problem. For those people who like to use the filtering or tagging mechanism of Gmail, where you can add whatever you want after a plus sign, it can be very useful for signing up for mailing lists and being able to ignore them if they decide to go rogue and spam you daily. There are a lot of sites that won't let you do this and say it's an invalid email address. And then, of course, with the explosion of domain names, how many old sites have email regexes that try to say you can't have your fantastic, fancy new domain? That's totally not accurate. And those are just some examples; these are examples of valid email addresses as listed by Wikipedia. So, do you think you can create a regex that will accept all of these? Don't even try. Just send them an email. That's the way to do it, right? Send them an email, make them click a link; then you know whether it's right or not.

And that's just the start; that's a lot of front-facing user interface stuff. But validating input is only one example. Adding protections based on the explicit expectation that edge cases will be hit is really important. These are the cases where you need the correct usage of mutexes, database transactions, or background jobs to ensure that what you're expecting to happen actually happens. So that you can protect against the case when two users hit that "I want to reserve this ticket" button at the exact same time. Or to make sure that all the steps in a financial workflow actually happen, so you don't get dangling shopping carts or people who get charged five times for trying to buy something on your site. That will make them really unhappy.
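As a concrete sketch of that ticket example: instead of a check-then-save that two simultaneous requests can interleave, do the check and the change inside one database transaction while holding a row lock. Something roughly like this, assuming a Rails-style app; the Ticket and Reservation models and the available column are made up for illustration.

```ruby
class SoldOutError < StandardError; end

# Hedged sketch: lock the row, check it, change it, all in one transaction,
# so two simultaneous "reserve" clicks serialize instead of both succeeding.
def reserve_ticket!(ticket_id, user)
  Ticket.transaction do
    ticket = Ticket.lock.find(ticket_id)   # SELECT ... FOR UPDATE row lock
    raise SoldOutError if ticket.available <= 0

    ticket.decrement!(:available)
    Reservation.create!(ticket: ticket, user: user)
  end
end
```

The same shape applies to the financial workflow: either every step in the transaction happens, or none of them do.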
So, in reality, the human experience and our view of the world has expanded drastically in the last 400 years. For the vast majority of human existence, we've been living in the thousandths-to-thousands range: the blink of an eye, the width of a hair, the lifespan of a human, the size of a mountain. And in the last couple hundred years, we have moved from that to millionths and millions, to billionths and billions, and to trillionths and trillions. To write scalable code, we must start by developing for that millionth user, for that billionth request, and for that trillionth event. In this world, the edge case is not the edge case; it's the certainty. Assuming these things are not going to happen because you think they're rare events is folly, and that's the problem that's going to cause you many sad nights later on in life.

So that is what I would like to share with you. Keep in mind, when you're developing your code, that your brain is not as smart as it thinks it is, and that big numbers are very big and small numbers are very tiny. Thank you.

Thank you.