What's up, everybody? It's an honor to be closing out the day. I'm from Chicago, part of the Chicago Ruby community, so you probably don't know who I am. I've been a software developer for about 20 years; I started using Ruby via Rails in 2006. I went to school to study speech, audiology and speech pathology, and after graduating I realized that my degree had prepared me to do absolutely nothing. So I got an entry-level programming job at a nonprofit in my town, one thing led to another, and I just kept programming. I figured that one day people would find out I don't really know any computer science, and then I'd have to find another job.

I've since learned that it's called imposter syndrome when you feel like you don't belong somewhere you actually do. It took me about ten years to realize that I'm just as capable as someone with a computer science degree.

For a while, actually, I had a very anti-computer-science attitude. I got to the point in my career where I was hiring other people, and I would interview fresh college graduates with CS degrees and feel like they didn't really know anything about sitting down and actually programming. That came from a misconception in my head that I'm going to explain here. So I had this very strong anti-computer-science attitude.

Then, a number of years ago, I got interested. I'd be reading things on the rails-talk mailing list or the ruby-core list, they'd mention terms I didn't know, I'd start to look those up, and I found myself getting into some computer science. I now teach in the master's program for computer science at the University of Chicago. Don't tell them I don't have a computer science degree. Is that camera on? Oh, yeah.

I want to tell you a quick story of how my foray into computer science has changed me as a developer, and hopefully it can be inspiring to you as well. The title, as you might have caught on, is an inside joke about the famous Crockford book on JavaScript. This picture appears at what seems like every conference; it makes the rounds. There's JavaScript: The Definitive Guide on the left and The Good Parts on the right, and you can tell which is which by the relative thickness. I'm going to try to do the same thing with computer science: make you aware of some of the good parts, just enough that you can keep learning on your own. I won't go into great depth; I don't need to with this crowd. I'm taking a very beginner-focused approach, so hopefully you don't need to know anything about computer science to get something out of this.

If you open any computer science textbook, you'll see something about data structures. That's usually where I would close the book; I couldn't think of anything more boring. But computer programming is arguably just a matter of data transformation. That's a weird way to think about programming, especially for someone like me, an object-oriented kind of thinker; giving data that primary role might seem strange.

This is a photo of no one's actual garage; it was taken from an ad for garage organization products.
But the idea is that if you've got something to put away, you want to put it away in such a way that you can easily go get it again, and depending on what you're putting away, you might need different kinds of containers. That's how I've come to think about data structures: we use them only because we need a place to put data when we're not using it at that moment, so it behooves us to put it away in a way that lets us get it back easily when we need it.

Let's start with probably the most fundamental data structure one learns in computer science: the linked list. This is my favorite picture of a linked list. As Ruby developers, we're accustomed to arrays. A linked list is not an array; it's like an array, but with only the most primitive operations you could possibly have and still barely be able to store a list of things. In a linked list, you have only the ability to get the first thing in the list, and from there you can only get the next thing it's connected to. Each element is linked to the next; I imagine it like elephants going trunk to tail, trunk to tail.

I once sat down to build a Ruby class that gave me only the ability to push things onto the list, get the first element, and from any element get the next one. For everything I had to do, I couldn't just say, give me element number five; I had to start at the first element and walk the chain to get to that fifth element. It's a great exercise. Even if you space out from here to the end of the talk, just try sitting down tonight and writing a Ruby class with that minimal functionality. I think it will warp your brain a little bit; it did mine. The solutions are generally very counterintuitive, and there are lots of different ones. So that's the term linked list, and it's our entryway into data structures.

From there, you may have heard of the binary tree. Instead of one connection between elements, we can have two, and the cool thing is that if you're a little smart about how you make the connections, you can pre-sort the data as you go. Basically, it works like this. Say we start with the number 60, and someone gives me 31. Instead of just attaching it, I've got two choices, so I use a convenient rule: less than 60 goes on the left, greater than 60 goes on the right. So 31 goes on the left. Say the next number is 80; that goes on the right. If the next number is 70, I start at the top: 70 is greater than 60, so I move right. Oh wait, there's an 80 there, and 70 is less than 80, so I go to its left. And so on. That's how we populate a binary tree, and it makes traversing the tree to go get elements very easy, because everything is already in sorted order.
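Before we go further: if you want to try that linked list exercise tonight, here's one possible shape for the class. This is a sketch of mine under the constraints described above, not the solution from the talk.

    # A minimal singly linked list: you can push a value, get the first
    # node, and from any node get only the next one. Nothing else.
    class Node
      attr_accessor :value, :next_node

      def initialize(value)
        @value = value
        @next_node = nil
      end
    end

    class LinkedList
      attr_reader :first

      def initialize
        @first = nil
      end

      # Walk the chain to the end and attach the new node there.
      def push(value)
        node = Node.new(value)
        if @first.nil?
          @first = node
        else
          current = @first
          current = current.next_node while current.next_node
          current.next_node = node
        end
        self
      end
    end

    list = LinkedList.new
    list.push(1).push(2).push(3)
    list.first.next_node.next_node.value # => 3

And here's a sketch of the binary tree insertion rule just described, using the 60, 31, 80, 70 example; again, this is mine, not the slide's.

    # Binary search tree insertion: values smaller than this node go
    # left, values greater than or equal to it go right.
    class TreeNode
      attr_accessor :value, :left, :right

      def initialize(value)
        @value = value
      end

      def insert(new_value)
        if new_value < value
          left ? left.insert(new_value) : (self.left = TreeNode.new(new_value))
        else
          right ? right.insert(new_value) : (self.right = TreeNode.new(new_value))
        end
      end
    end

    root = TreeNode.new(60)
    [31, 80, 70].each { |n| root.insert(n) }
    root.right.left.value # => 70, exactly as walked through above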
Who would ever use such a thing? Actually, if we stop to think about it for a minute, this kind of binary tree is used for lots of things. Decision trees, if you've worked on any sort of decision management or business-rule system. Object hierarchies: how does Ruby actually keep track of all the classes and subclasses that we write? Compilers, too: if you're curious about implementing programming languages, abstract syntax trees use the same kind of approach. And you can do very important things with trees, like the Star Wars family tree that someone put on the internet. I don't know why.

If you break free from having just two connections per node, you get what we call a graph. This confused me for years: I thought a graph was like a bar graph, so I'd be reading something that said, oh, you just use a graph, and I'd wonder, how do I use a bar graph for this? I didn't understand. But graph is just a generic word for nodes that are connected, and you can have as many connections as you want. This turns out to be amazingly useful for modeling things: social networks, maps, plumbing systems, electrical grids, security systems, figuring out degrees of separation, air traffic control (tilt the graph into three-dimensional space where the nodes are moving; very fun), finding the shortest path between things, neural networks, which is how we start getting into machine learning and artificial intelligence. So this graph structure, which a minute ago probably seemed pretty simple, is actually your gateway drug into all these other areas of computer science.

Speaking of maps: coming down from Chicago (I was thinking about driving, though in the end I flew), I punched the trip in, and Google Maps instantly gave me three route options. How did it figure that out? How does that actually work? This kind of thing keeps me up at night. It's just a graph, but the edges, the connections between all those roads, have values, either distance or traffic, and using those we can figure out the shortest path. That's also useful in industry for things like least-cost routing; you can model those problems as graphs, too. By the way, why does Honolulu have an interstate highway? I don't know. If anybody here is from Hawaii, please talk to me afterwards; I'd love to learn.

This is a snapshot of our current electrical transmission grid, an automated, self-balancing network that we've been building. It's an amazing thing, and our lives now depend on this kind of technology.

Does anybody happen to recognize this crazy photo? Awesome. This is a picture from the Curiosity rover that we landed on Mars a few years ago, and of course the first thing we taught it to do once it landed was take a selfie, right? There it is. The landing procedure diagram is too small to read; just note that it's extremely complex. The way we landed things on Mars up until Curiosity was to wrap the thing in a big trampoline, throw it at Mars, and wherever it stopped bouncing, we'd say, that's a good place to explore right there. But this was a big rover that cost a lot of money, so they came up with this extremely complicated system of parachute, heat shield, and crane. Just incredible. I heard about it only two days before it was going to happen, and I thought, this is never going to work. I'm in computers, and I know this is never going to work. The whole thing was completely under computer control. It's not like someone at NASA was moving a joystick, because one move of the joystick takes about three minutes to transmit to Mars. By the time you see the Martian monster, it's too late to change anything.
I don't think they were sure it was going to work either. But so much of our lives now happens automatically, through computers, without human intervention.

You can't really talk about data structures, though, without talking about algorithms. Many of you know Charles Babbage and Ada Lovelace, around the 1820s and 1830s, kind of the first ones to think creatively about how to get a machine to carry out a procedure; they were trying to do some simple mathematics at the time. And eventually that led to this event. This is actually my favorite photo of all time: Apollo 11, with an astronaut coming down the ladder. Do you know how long he thought about this moment? How many years he trained? How much work it took the space program to get him to this point? He's about to set foot on the moon. But then I thought, wait a minute: if that's Neil Armstrong, who's taking the picture? There are aliens on the moon! No, that's actually Buzz Aldrin; Neil Armstrong had already come down, and he's got the camera.

This was the first time we trusted computers with our lives. This early computer had to be invented just for this mission. Imagine: after they're done running around on the moon for a while, they climb back into this piece of tinfoil and push the button to lift off. If that button doesn't work, there is no rescue plan; there's no other way to go get them. So we had at least reached the point where we could trust computers. Nowadays we don't give it a second thought: if we have to go to the hospital, there's all this amazing technology, and we just expect it to work.

But this computer worked, because a lot of people worked on it, and particularly this person. Many of you, I'm sure, already know the story of Margaret Hamilton, lead engineer for the Apollo space program. This is the famous photo of her standing next to the printout of all the source code for that guidance computer. She's the one who saved their lives. If you watch the original recording of the landing on YouTube, multiple alarms were going off in the final 60 seconds while Neil Armstrong was trying to land. Her invention of something like threading, having the computer do multiple things at once with a priority scheme on top of it, allowed them to land and live, instead of having the computer worry about an alarm that wasn't actually important at that moment. She was really the first person to coin the term software engineer, and she did a lot to get us talking about software quality and what it means to have a system that works.

From there, I want to talk about the notion of complexity. For any given problem, there are an infinite number of good solutions. How do we compare one against another? Not all implementations are created equal, and there are generally two ways to compare this piece of code against that one: look at how each uses time, and how each uses space. You may remember Einstein talked a lot about time and space and how they're connected. Most people don't know his 1907 paper saying that time is money, which was genius.

The way we give a notion of complexity in computer science is what we call big O notation. I did not understand it for a very long time, so let me see if I can give a couple of examples, starting with this weird-looking capital O of N.
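The slide with the code isn't captured in this transcript, but based on the description that follows, the linear version would have looked something like this; a reconstruction, not the actual slide.

    # O(N): in the worst case we look at every name in the list.
    def exists?(names, name_to_find)
      names.each do |name|
        return true if name == name_to_find # found it: stop the loop early
      end
      false
    end

    exists?(["alice", "bob", "carol"], "bob") # => true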
O of N is how we describe the complexity of the lame implementation right there. It's not really idiomatic Ruby, of course; I just want it for demonstration purposes. It's a method, exists?: does this name-to-find exist in this list of names? I go through each one and return true if I find it, so I stop the loop early; otherwise I return false. The reason this is O of N is that as your input size gets bigger, the time this method takes to do its work will, in the worst case, vary linearly with that input size. If 100 items take however many seconds on your computer, 200 items will take twice as long in the worst case, and 1,000 items will take ten times as long. That's good to know, so that when you're writing your methods you're aware of how they'll hold up when you throw a lot of data at them. You can graph it, and it's just a line; that's what O of N says. N is the input size: how many things your algorithm is up against.

Now here's another implementation of that same method. It still returns true or false, but it uses what we call a binary search algorithm. Don't worry about the details too much; it works just like looking up a name in a phone book. I don't know if anybody knows what a phone book is anymore. I'm old. If you told me to look up a name in the phone book for Cincinnati, a city I've never been to, I'd open it up in the middle, see where I am, and because I know the phone book is sorted, go half and half again until I zero in. Same idea here: I grab the midpoint and see if I got lucky. If so, I'm done. If not, I ask whether my name comes before or after that midpoint, and I recursively call the same function on that subset. This lets me zero in on a name in about twenty tries, even in a phone book with millions of names. (The one requirement, as with the phone book, is that the list is already sorted.) So this performs very well against large data sets; it's what we call logarithmic complexity, written O of log N. One way to think about a logarithm is that it's roughly the number of digits in your number: go from 1,000 items to a million, and it only takes about twice as long, not 1,000 times longer. On the graph you can barely see that green line along the bottom. That's really good; you want to use methods with logarithmic complexity when you have a choice.

Anybody use the Atom editor like I do? About a year ago, I want to say, they wrote a blog post on one of the things they did to increase the speed of the editor, and one of their breakthroughs was this kind of divide-and-conquer algorithm. They used big O notation in the post, and I was actually able to understand what they were talking about. Hopefully, when you come across this kind of thing, it'll now be easier for you, too.

Finally, here's a counterexample. Say I've got an array and I want to build a bigger array of all possible combinations of its items. I map over the items, but for each item I also have to go through all the items again to make all the possible combinations. This has what we call N-squared complexity. This is bad. Do not do this. In fact, the telltale sign is nested loops: if you ever see that you have nested loops, you should worry.
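Those two slides aren't captured in the transcript either, so here are reconstructions of both. First, the binary search version of exists?. One honest caveat: slicing an array in Ruby copies it, so a production version would track indices into the array instead of building subarrays.

    # O(log N): binary search. The list must already be sorted.
    def exists?(sorted_names, name_to_find)
      return false if sorted_names.empty?

      mid = sorted_names.length / 2
      middle_name = sorted_names[mid]

      if name_to_find == middle_name
        true # got lucky at the midpoint
      elsif name_to_find < middle_name
        exists?(sorted_names[0...mid], name_to_find) # recurse on the left half
      else
        exists?(sorted_names[(mid + 1)..-1], name_to_find) # recurse on the right half
      end
    end

And the N-squared counterexample, roughly as described:

    # O(N^2): nested loops, the telltale sign. For every item we walk
    # the whole list again to build every possible combination.
    def all_combinations(items)
      items.flat_map do |a|
        items.map { |b| [a, b] }
      end
    end

    all_combinations([1, 2, 3]).length # => 9; double the input, quadruple the work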
As the number of items increases even a little, the time it takes grows as the square of that, and it very rapidly gets out of control. There are other big O notations, but hopefully now, if you weren't familiar or comfortable with them before, you can start to look these things up and go from there.

All right, finally, let's talk about the future a little bit, about what's next in computer science; that's why I'm so excited to now be a part of it. Let me go back to World War II. Who can name the British mathematician who helped break the German Enigma machine and helped end the war? Exactly: this guy, Alan Turing. We sort of know him for the code breaking, especially if you've watched the movie, but actually he did a lot more than that. After the war was over, he did a lot of deep thinking about the role of computers in society, ideas that made no sense to his peers. He was worried about what happens if machines start to run our lives, and I think his peers were like, dude, it was just a machine running against a code. It's the size of a room; don't worry about it. But he was already asking what it all meant. Can machines learn to think the way our brains think? And if so, what would happen? That's in the news today, right? Elon Musk is often asking whether we should worry about artificial intelligence.

The main idea I learned from studying Turing is that computer science is not computer programming. I thought it was. It's not. They're really two different things. I was able to do computer programming just fine knowing very little computer science. The two are related, and one can empower the other, but let's not mistake one for the other. I believe computer science is more about a way of thinking; some people call it computational thinking. Do you know how to look for cause and effect? Do you use logic, experiment, and empirical results? That's part of computational thinking. So is breaking a problem down into small pieces, which is super critical. I specialize in working with non-programmers who are just beginning to learn programming, and these elements are usually the hardest things for new programmers to learn, even though they're second nature to the rest of us. So is being able to do thought experiments, like Turing did about artificial intelligence and machine learning.

The other inspiring thing about Alan Turing is that he focused on what was important. When he found himself in a bureaucracy more concerned with minutiae than with the important things of the day, he would get out of that situation. He just couldn't stand it. And that reminds me of another hero in our history of computer science: Grace Hopper. She was of the same era. She rose to the rank of rear admiral in the Navy. She basically invented COBOL. She was the first to give us the notion of a compiler, this idea of an indirection between the code that you write and the actual machine language. Maybe that's where we get the adage that there's no problem another level of indirection can't solve. She used to tell a story about generals who would come to her, this would be in the 60s, and complain: I'm trying to get this communication from ship to shore, and it takes so long. It's going up to a satellite and back down at the speed of light. Why is it taking so long?
And she would look at them and take out a piece of wire about this long. I have a dozen of them up here if you want to come up later and grab one; I call it the Grace Hopper wire length. She would ask them, do you know how far light travels in one nanosecond? And she would hold up the wire: 11 and five-eighths inches, almost a foot. So when you're trying to send something from ship to shore, up to the satellite and back down, give the signal a break; it's going to take the light a little while to make that trip.

She's also famous for recording the first bug, back in the beginning. The logbook entry is too small to read here, but it says: Relay #70, Panel F, (moth) in relay. First actual case of bug being found. They had suspected that insects were getting into the circuitry and causing problems, and now she finally had proof.

But she didn't just work on the technical side; she was inspiring. She said, humans are allergic to change. They love to say, we've always done it that way. I try to fight that. I don't know if you work somewhere like that. It reminds me of Turing. She had a clock on her office wall that ran counterclockwise, and when people would point at it, she'd ask, why does a clock have to run the other way? Is there a technical reason? No, we only do it for tradition. Challenging assumptions is such an important part of computer science; we need people to push the envelope. Back in the 70s, the famous rock musician Frank Zappa said, without deviation from the norm, progress is not possible, and I think that's true. Hopper said, a ship in port is safe, but that's not what ships are for. Sail out to sea and do new things.

Many of us here are entrepreneurs, others of us are not, but we're all trying to do new things. And if you've ever actually sailed out to sea, you know it's a scary place. We're fearful of becoming isolated or falling overboard. But I'm perhaps most grateful for having encountered computer science for this reason: I am not alone. We are not alone. Hopefully, as you've seen, we are standing on the shoulders of all these other people's work, and computer science isn't really about data structures and algorithms. It's about joining a long tradition of using computer programming to advance the world.

Life is short. To many of my friends and family, I have these amazing superpowers: I can make apps. So what am I going to use them for? I would submit to you: do something important, something that matters to you, your family, or your community. Matz has said that perhaps the biggest invention to come out of the Ruby language is the Ruby community. But we need to help each other; like it or not, we depend on each other. Just look at our Gemfiles. I cannot pay my mortgage without the work that many of you do. So next time, instead of reinventing the wheel, contribute to someone's existing open source project, or mentor someone in something you already know. If we make the tide rise just a little bit, all of our boats will be lifted. Thank you very much.