Yeah, so I was playing a little Tacocat. Anyone familiar with Tacocat there? No? Oh my God, it's criminal. They're kind of like a combination. They're like an English version of Shonen Knife, which is a Japanese version of the Ramones. They're big in Seattle and they're starting to tour, so they're my new favorite band this year. So yeah, as PJ said, my name is Kerri Miller. I'm based out of Seattle. I am mostly a back-end Ruby and Clojure developer for LivingSocial. What are we doing at LivingSocial? You win. Come see me later for a sticker. Before that though, I was a teacher at Ada Developers Academy. Has anyone heard of Ada Developers Academy? I see three people. I hear eight. Okay, that's great. No one here likes public participation. It's a 12-month boot camp, essentially: six months of full-time classroom followed by a six-month internship for students. And it's aimed primarily at women who are transitioning into a programming career. And what do people pay for this awesome, awesome program and support? Zero dollars. In fact, we actually pay the students $1,000 a month as long as they're in the program. So that's something I'm particularly proud of. But while I was there last winter, I was trying to think of things to teach and someone said concurrency. I said, okay, well, let me think about concurrency. So I started thinking about it. You know, and I watched a lot of conference talks and I read a lot of blogs and this is kind of my reaction. But I thought, you know, I can do this. I can do this. Anyone can do this, right? Anyone can fit a pony into their talk. It's easy. So if you go out and you watch conference talks about concurrency, and specifically how we do concurrency in Ruby, you end up kind of like this after a while. And every conference talk you see with people talking about concurrency, they look a little mad. They look a little insane.
And they're using a lot of words and terminology and concepts that, if you haven't explored concurrency or don't have a CS background, you might not be familiar with. So that's kind of what I'm gonna talk about today. Pretty much what people always say about concurrency in Ruby are things like this. You know, and I'm just a simple cavewoman. So like, what does all this mean? What is concurrency? That's kind of like the basic question that a lot of people have at the beginning. Does anyone wanna take a stab at it? Doing multiple things at once? Yeah, that's not a bad idea. Actually, how many people here are CS majors? Out of curiosity. Oh, it's about half, actually. Okay, that's about right for a Ruby audience. We seem to be either CS majors or crazy weird artists, you know, musicians and things, or both actually, a number of us there too. So I did go to art school, but in an alternate timeline, I'm actually a chef de cuisine, because I went to cooking school for three whole weeks. Obviously, I'm qualified. One of my classmates was Alton Brown, so you know that I'm kind of a big deal. So as I was reading, I decided to go back to school a little bit. And I got some computer science textbooks, and all of the concurrency examples start with, imagine an infinite supply of spaghetti. And they present you with this, which is the dining philosophers problem. Does this look familiar to anyone? Couple people, yeah. And this is kind of how a lot of CS programs start describing concurrency. It's this idea that you have all these philosophers, there's five of them, and they need two forks each to eat spaghetti, but they can only grab one fork at a time, and then, like, what do you do with the philosophers with no forks or one fork? And it's really an example to sort of model what happens when you get into a resource-constrained environment: you start having deadlocks, and threads can't complete because they can't get resources, and they can't get resources because the other ones can't complete.
And so it's this horrible, horrible thing, but that's not really concurrency in a way. That's just describing a model of handling processes. So I dug back in. I said, you know, this kitchen thing sounds good. So I thought about Thanksgiving, as I often do, being from New England. I love those candied yams, you know. But all of these dishes are generally served on the same table, right? We've got to get it onto the table by 4:30 because there's a game on at 5:30 and grandma's gonna have to go home at six and someone's gonna get drunk if we don't get food into them soon. That's a typical American family Thanksgiving, at least in my house. Am I the weird one? Okay. Well, trick's on you, I am. But here's the problem with most typical American homes, right? Like, I don't know about y'all, but I have one oven, and a turkey cooks at about 300 degrees for four hours, and roast vegetables are 400 degrees for 45 minutes. So how do I handle that? How do I handle that? And so cooking Thanksgiving dinner is very, very much a parallel sort of idea to a concurrency problem, where we have conflicting needs, and those things are all battling for limited resources, and they often need to interact with each other. For example, if you need the pan drippings from the turkey to make gravy, or you need that pot again because you really only have one roasting pan, so you have to organize these sorts of things. And it's a lot of planning. But this is just one day out of the year. What do these people do? When we put in an order at a restaurant, it doesn't get processed in the order that we give it, right? It's not that one chef gets our ticket and makes each item in the order we gave it to the waitress, right? They have different people with different jobs. You have the garde-manger, who handles salads and vegetables; a saucier, who's responsible for sauteing and putting hollandaise sauce on things. People with very specialized jobs and very specialized roles.
And that's how they manage to get everything onto the table at the same time, looking relatively well cooked and nice, usually hiding the bad bits on the bottom. And it's a lot like how CPUs are built today, because we're actually at the threshold in many ways of the actual physics of computer chip technology. We're at the point where we're making chips so small that quantum effects are starting to impact the ability of a processor to do things. The heat generated by electrons going through all the little tiny wires in processors starts to interfere with other parts. And so we have to do other things. So we start building chips that are able to analyze our code and process it out of order and break it apart, because it's cheaper to go further down the stack to do something before you're finished with something higher up on the stack. It's a very weird voodoo area for just a simple Rubyist like myself. So if you can imagine the CPU has a little tiny kitchen, it has lots of these little chefs that are running back and forth, and by using some of the tools of concurrency we can start to impact what they do and assign them different tasks to process. Now, somebody said that concurrency is doing lots of things at once. Well, I'm doing a lot of things at once right now, right? I'm trying to talk into a microphone, I'm trying to stay within this puddle of light that I have up here, I'm trying to digest very quickly the noodle lunch that I had. And I'm also trying to give a talk. So really it's not about doing a lot of things at once, although in a way it is; it's the coordination of doing things at once. Things that need to interact with each other, that are related to each other. For example, sending five groups off in different directions to go to lunch and expecting them all to come back at the same time. That's kind of a concurrency problem, right?
Because I don't really know how long each one's gonna take, but I kind of expect they'll take a certain amount of time, but I can't predict it, but I need to coordinate it to get them back. So really, concurrency is about dealing with lots of things at once. Parallelism is simply doing a lot of things at once. And as you look at concurrency, this is one of those things that comes up an awful lot. People keep saying this kind of idea, but it doesn't really tell you what actually is going on in there. So how does Ruby kind of solve this problem? How does it take our programs and allow us to split tasks up and make things happen multiple ways on a chip or set of chips and then kind of all come back into a glorious finish at the end? Well, we have to kind of start with understanding the process model. So a Ruby program runs inside of a single operating system process, generally speaking. And inside of that process, Ruby can execute a thread, or another thread. So a thread is essentially a little tiny mini process that's running inside of the same memory space. So it has access to the same variables and the same little bits of RAM that have been assigned to the process under which it's running. Each thread then shares this space of memory, which they have to kind of manage between each other. And in an ideal world, we let the CPU figure out what to do from there. So the trick is that way back, as far back as MRI 1.8.7, we have this idea of green threads. So these threads all sort of look like operating system threads, multiple things happening in the same spot of memory. But Ruby itself figures out the order to do things. Unfortunately then, we're only able to really process what's going on in one of these threads at a single time, because of this thing called the global interpreter lock, which I like to imagine looks like this. It's got my program tied to the railroad tracks. It's got my code all tied up.
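The snippets from the talk aren't in the transcript, so here's a minimal sketch of that shared-memory point: a thread can see, and mutate, a local variable from the scope it was created in.

```ruby
# Threads live inside the same process, so they share its memory:
# this thread reads and replaces a variable owned by the main thread.
greeting = "hello"

t = Thread.new do
  greeting = greeting.upcase # same objects, same RAM as the main thread
end
t.join

puts greeting # prints "HELLO": the main thread sees the change
```

That sharing is exactly what makes threads convenient, and exactly what the GIL exists to police.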
So when it comes to concurrency in Ruby, the GIL is kind of the villain of the story in a lot of ways. It's this locking mechanism with very good intentions. What it does is it protects your data. It allows data to be modified by one and only one thread at any given time. And this is meant to prevent corruption or leakage between these threads, because remember, they're all sharing memory, right? So they can kind of interact with each other. They can break things like the Law of Demeter, right? Like, reach over and screw around, or they can share the same variable that's gonna be changing out from under them. And the GIL kind of helps protect us from those sorts of things. And in 1.8.7, this wasn't really a huge problem under MRI. Since the Ruby interpreter itself was only assigned a single thread in the main process by the computer itself, the work couldn't get spread to other cores. It was fine, you didn't really need to worry, because you could only execute one set of instructions at a time anyway. This is a little tiny Ruby script that I wrote. It doesn't really do anything, it just loops endlessly. But if you watch the output of a process list, you can see that there's only a single process and there are no subsidiary threads. You all can't see that at all, can you? Basically, I've got a couple Z-shells here, and then I've got one single Ruby executable that's running here, and it's running at 100% of my CPU. It's taking up as much resources as it possibly can, and that's it. That's all that's running on my machine. When we got to 1.9.3, MRI was suddenly able to hand multiple threads out to the CPU, to the kernel here. So Ruby could access multiple cores. It could suddenly let the CPU do all the scheduling, to say when one thread needed to execute versus another. And that's really cool, right? Because now we could actually use threads and have concurrency, except there's still the global interpreter lock.
So we've got multiple threads and kernel threads now, but we still have the GIL to get around, and that's true in MRI 2.0 and 2.1 and probably 2.2. So here's the same exact script running under 1.9.3. I still have a couple Z-shell terminals open, but here's my Ruby process, and now I've got some threads running, and these three right here, these are the three threads that I set up in the original script. And the amount of processor that they're taking up here totals about 100%; they're sharing the resource. However, in this column right to the right of those numbers, the stat column, S is for sleeping and R is for running, essentially. And only one of those is running. So they're frantically switching back and forth. So it thinks it's using all the processor, but it's not, it's switching between them. Now, not every Ruby interpreter has this sort of limitation. Rubinius doesn't, MacRuby doesn't, JRuby certainly doesn't. JRuby is really the most popular one that people who want to have multiple threads, and have the Ruby program doing multiple things at the same time, are running on. This is the same exact script running under JRuby, which runs on the JVM, so it's not reaching down into the C libraries. Here's that same exact Ruby script and a whole bunch of Java threads, because Java, that's efficient. But down here at the bottom you can still see here's our three threads that I spun up, and you can't see it, but I can. They total about 180%, which means it's splitting across more than one CPU, and their status, all three of them, is R. So they're all running in parallel. So in this case, under the JVM, I've got three threads that are all running simultaneously, counting to 10 forever. You still can't see it. So a lot of people have this kind of reaction, right? Why do we have the GIL? Why is it there? It's preventing us from taking full advantage of these massively-cored machines we have, right? Like, I bought a Mac Pro.
I've got two 16-core chips in there. Why can I only use one of them? Why did I pay for all that extra? Matz owes me a computer. Actually, probably two by now, just with the amount of time I've spent working on this stuff. But the GIL really isn't bad. It protects us from shooting ourselves in the foot, because Ruby's concurrency tools are a little bit primitive and they're hard to manage. And so the GIL protects us. But really, what the GIL is there for is protecting Ruby, because MRI Ruby being written in C, not all the C libraries it depends upon are thread safe, which means that the C itself, in trying to execute under these different threads, could be breaking things where we wouldn't even know; the problem wouldn't be inside of our Ruby but inside of the interpreter itself. And so the GIL keeps us safe. So here's another little chunk of Ruby code. It's just benchmarking what happens if we sleep for one second twice versus creating two new threads, each of them sleeping for one second. Anyone wanna guess what happens? You've seen a conference talk before. Exactly. So the first one runs and it sleeps twice, one second, loop, one second, and it takes two seconds. The version with the threads runs and it takes about a second to run those two threads simultaneously. And this is about what we'd expect, because while we're sleeping for a second, the interpreter says, sweet, I don't have to do any work here, I can go let the other thread do something. And so they end up taking about a second. However, if we actually try to do this where we multiply one times one 10,000 times, or we have two threads that each do it 5,000 times, we actually run into this problem where our first solution takes six 10,000ths of a second, and the threaded version takes seven 10,000ths of a second. So it's actually slower. It's actually slower here to do the threading.
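The two benchmarks described here look roughly like this (my reconstruction; exact timings will vary by machine):

```ruby
require "benchmark"

# I/O-ish work: sleeping releases the GIL, so the two threads overlap
# and the whole thing takes about one second instead of two.
sequential = Benchmark.realtime { 2.times { sleep 1 } }
threaded   = Benchmark.realtime do
  2.times.map { Thread.new { sleep 1 } }.each(&:join)
end
puts format("sleep: %.2fs sequential vs %.2fs threaded", sequential, threaded)

# CPU-bound work: the GIL runs only one thread's Ruby code at a time,
# and the switching itself costs something, so threads can come out slower.
cpu_sequential = Benchmark.realtime { 10_000.times { 1 * 1 } }
cpu_threaded   = Benchmark.realtime do
  2.times.map { Thread.new { 5_000.times { 1 * 1 } } }.each(&:join)
end
puts format("math: %.6fs sequential vs %.6fs threaded",
            cpu_sequential, cpu_threaded)
```

Under JRuby or Rubinius, the CPU-bound half of this story changes, because the threads really can run on separate cores.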
And that's because the threading has to switch back and forth between these two threads and do scheduling and actually do all those calculations. And so this is kind of one of those surprising moments where people say, oh, well, our Ruby app is slow. Man, Rails is really slow. I'm gonna throw concurrency in. And then it slows down, it slows way down. I had a gem that does poker hand calculation. It takes about five minutes to run through 1.3 trillion hand combinations. PJ wasn't kidding. Don't play poker with me. And I thought, wow, five minutes. Well, I could write a C extension, but I hate myself, or don't hate myself. So I'm gonna put some concurrency on it and do all these calculations next to each other. And then it took seven minutes. So that was me screwing up. Here's another little piece of code. It spins up 10 threads that loop and increment this sum value each time. And whenever the value is divisible by 250,000, it prints it out. At the end, it should print out about a million. That's kind of what we'd expect, right? But you've seen the conference talk. So you know, in fact, it doesn't. It doesn't actually. So what's going on here? Why are these threads unable to perform their functions? Well, the problem is, in this case, we have this print statement here. And just like the sleep, where the interpreter hit it and said, great, I'm not doing anything, I can let another thread do some work while I'm waiting for that sleep to end, here it's saying, sweet, we're printing something. It's going out to standard out. I can let another thread take over and do something. And so it gets up to 250,000, and then it tries to print it and it stops. And the next one comes in and starts at zero and it runs up and so on and so on. And so we get very, very weird behavior. It's not what we expect. And mostly it's because we're doing a print statement in there, but really it's because what we're doing is we're sharing state.
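Here's a hedged reconstruction of that racy counter. The point is that `sum += 1` is really a read, an add, and a write, and the `puts` is a spot where MRI will happily switch threads, so two threads can read the same `sum`, both add one, and an update gets lost.

```ruby
# Ten threads hammer one shared variable scoped outside the threads.
sum = 0
threads = 10.times.map do
  Thread.new do
    100_000.times do
      sum += 1                          # read-modify-write: not atomic
      puts sum if (sum % 250_000).zero? # I/O invites a thread switch
    end
  end
end
threads.each(&:join)

puts "expected 1000000, got #{sum}" # can come up short: updates were lost
```

Whether you actually see lost updates on a given run depends on the interpreter and scheduling, which is exactly what makes bugs like this so nasty to reproduce.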
These threads are communicating with each other through this variable that's scoped outside of the threads. They're coming down here and they're setting this value of sum, and they keep running over each other, because they've got one mailbox and they keep shoving different values in there and screwing each other up. But this is actually a really super easy problem to solve. We can just add a mutex. Anyone not know what a mutex is? So everyone else knows what a mutex is? I don't know, if you raise your hand, it's pretty cool. So basically the idea is a mutex is like a hall pass for a variable, right? It's like, only one person can leave this classroom at a time. And so we create a mutex, and then we say inside the thread that we're gonna synchronize, and it's like, think of a transaction, right? Like, this is all going to complete, and nobody else can grab any resources from this thread until this thread is finished. That's cool. Sometimes you'll see them called monitors or semaphores, there's a few different models, but this is basically what it's doing. It's the key to the bathroom. And so we throw that in, and now our code runs perfectly fine, if you could see it. So that's kind of the state of doing basic concurrency in Ruby, in that you can create new threads and tell them to do things, and sometimes they screw up. We shoot ourselves in the foot, and so we have to build these very elaborate structures around our code in order to keep them from blowing up, from returning weird things, from saying banana when we ask what kind of car do you drive. It's a very, very weird world sometimes. And so if you wanna do real concurrency and still write Ruby, you gotta switch to one of those other implementations like Rubinius or MacRuby or JRuby. Pretty much JRuby is the only one I've ever seen in production. So has anyone ever done a Rubinius project? MacRuby? One person, okay. So I don't like that, right? Like, who likes that? Who wants to build everything for yourself?
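With a mutex wrapped around the read-modify-write, the same counter comes out right every time. A sketch:

```ruby
# The mutex is the hall pass: only the thread holding it may touch sum.
sum  = 0
lock = Mutex.new

threads = 10.times.map do
  Thread.new do
    100_000.times do
      lock.synchronize { sum += 1 } # the whole update completes, or waits
    end
  end
end
threads.each(&:join)

puts sum # prints 1000000, every run
```

The cost is that the synchronized section runs one thread at a time, so the more of your work lives inside the lock, the less the threads buy you.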
All these other implementations and languages have things built into them that handle concurrency problems for us, and we don't have to deal with the wiring of everything. It's as if I bought a new car and it came in a kit and I had to rebuild it from scratch every time, or, you know, I had to put gas in it every day. That would be horrible. Who wants that? But again, threads and mutexes and fibers in Ruby, that's not really concurrency. That's just the vocabulary. That's the building blocks, the words that Ruby uses and gives us to express what we want. And concurrency isn't about that. It's about doing things, many things, and coordinating them. So threads are simply a means to an end. It's important to understand threads and mutexes if this is something you wanna get into, but it's not the end of things. It's merely the beginning. Cause when you crack the hood on an actor or a supervisor or a reactor pattern, you find threads, you find mutexes. But why do we have to build it for ourselves? I can keep going on about this point, cause it really, really annoys me, because Ruby's about happiness and I wanna be happy and I love Ruby. It's what I do all day long and it makes me so friggin' happy, and then I gotta do this. I have to build it myself, and it's horrible. And Matz knows it. He said, like, you know, if he had it to do over, he'd rip it out, put in actors or some other advanced features that let you work with concurrency, because this stuff's really kind of primitive. So what are these other things that other languages have to give us? What could we go learn about concurrency by picking up another language and playing with it? A lot of them use something called the actor model, which is kind of the Erlang and Scala plan. An actor is very much like a thread, you give it a task to do, except it doesn't share memory state with the other threads or the other actors.
In fact, those actors communicate with each other not by sharing state; they share state by communicating, by sending each other messages. This is what I'm doing. What are you doing? Hey, I hit this point. Hey, I hit this point. This is my result. And that's really much more of an OO solution, right? We have objects that can send messages to each other, instead of us having to, like, reach across and see what's going on in your guts. The reactor pattern, which is super popular in Node.js, for better or worse, is surprisingly simple. It kind of starts like this. What's better, Ember or Angular? So, Vim, Emacs? I'm trying to start a fight because, man, lunch. I wanted to fight someone at lunch. Slowest service ever. Instead of waiting for our actors or our threads to go figure out which is better, Ember or Angular, I say, which is better? If you like Ember, you should go read these three blogs. If you like Angular, go get some work done. And now you know where my bias lies. So I don't have to sit there. I don't have to wait for this response, right? It's like, I can just do it and walk away. Unfortunately, anyone see the problem with this? And the reason why Node is kind of doing some soul searching, like, oh my God, what have I signed up for? Well, now we have to do all the upfront work of defining, for our threads, do this thing and then do these two other things depending on that. And then those things could have their own callbacks and results. And so it can really spiral into a really nasty world really fast. And if something blows up way down the stack in one of those branches, what are you gonna do? So there's a lot of work you have to do to, like, recover and think about it. But it's still pretty popular. It gets used in a lot of places, when kept at a very simple level. In that way, it's a little bit like inheritance in Ruby or in Rails. We don't use it all that often, you know, but when we do, it's because we know we need it.
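You can fake a tiny actor in plain Ruby with a thread plus a mailbox. This is only a sketch of the idea, not Celluloid or Erlang, and every name in it is made up for illustration.

```ruby
# An "actor": one thread, one mailbox (a thread-safe Queue). Nobody
# touches its state directly; you only send it messages.
class MiniActor
  def initialize(&behavior)
    @mailbox = Queue.new
    @thread  = Thread.new do
      while (message = @mailbox.pop) != :stop
        behavior.call(message)
      end
    end
  end

  def tell(message)
    @mailbox << message # asynchronous: the caller doesn't wait
  end

  def stop
    @mailbox << :stop
    @thread.join
  end
end

# Results come back as messages too, never by peeking at the actor's guts.
replies = Queue.new
doubler = MiniActor.new { |n| replies << n * 2 }
[1, 2, 3].each { |n| doubler.tell(n) }
doubler.stop
```

Because the actor processes its mailbox one message at a time, its internal state never needs a mutex at all.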
We don't go six or seven stacks deep, and we're not Java developers, come on. We chose to be happy. So this is really the biggest problem with actually trying to use a reactor pattern: it kind of forces you to change how you write code. And you can use these in Ruby, but it's a totally different way of thinking about your code and structuring it. Golang. Is anyone, any gophers? I'm kind of a mini-gopher. Brian, I know you use Go. Yeah, I love Go. It is so much fun. I can't decide whether I wanna be a Go programmer next or a Clojure one. I won't start a fight about that, but I would love to hear your opinions if anyone has played with those. I'm really enjoying Go. Go has this idea of channels, which is kind of like an actor, right? You say, hey, I've got some work I want you three to do, and I want you to all do it at the same time and be coordinated about it and then come back to me when you're done. Except, imagine your house is on fire. I mean, not right now, don't leave. Like, don't panic, but imagine it was. Would you call Sally, John, and Bob down at the fire station on their personal cell phones and tell them to come to your house? No, you call 911. You call a channel. You don't call the actors that are listening to that channel. And so one of the interesting things about channels is it kind of becomes a pub/sub system for these concurrent things that are happening inside your system. And later on, you can expand those very easily into service calls or full-fledged applications on their own. So it's really an interesting slight change in thinking: instead of having an actor do something, you have a channel do it, and you kind of don't really need to worry about what's going on underneath there. You're sending a message instead of telling something to do something. That's very OO. So it's been about 28 minutes, and I talked about three patterns and a whole bunch about the GIL, and I hope some of that stuck.
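Ruby's stdlib `Queue` gives you a rough taste of the channel idea: callers push work onto the channel, and whichever worker is free picks it up. You call 911, not a specific firefighter. A sketch, with made-up worker logic:

```ruby
# A Queue as a poor man's Go channel: three workers all listen on it,
# and the sender neither knows nor cares which one handles each job.
jobs    = Queue.new
results = Queue.new

workers = 3.times.map do
  Thread.new do
    while (job = jobs.pop) != :done
      results << job * job # stand-in for real work
    end
  end
end

(1..5).each { |n| jobs << n } # "call 911"
3.times { jobs << :done }     # one shutdown message per worker
workers.each(&:join)

squares = []
squares << results.pop until results.empty?
puts squares.sort.inspect # prints [1, 4, 9, 16, 25]
```

Swapping the in-process `Queue` for a message broker later is what makes this style easy to grow into separate services, which is the expansion path described above.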
And I hope that you don't feel like this. I feel like this a lot when I'm thinking about concurrency, when I'm trying to use it, because it can be a little bit crazy. Concurrency can melt your brain, because it behaves a little bit differently than you expect, especially when you're a Rubyist; you just kind of get used to not being surprised, right? I've got this, like, text file in Evernote that I keep of every time Ruby surprises me with something. And someday I'm gonna put together, like, a top 10 weird things about Ruby that Matz doesn't want you to know about. It's gonna be the best talk ever. It's gonna be so good. So many ponies. But concurrency is a place where this happens, right? Like, the program's doing exactly what you told it to, right? Computers are dumb that way, except now we've given the computer a little bit of autonomy to make some decisions about what happens when. And we don't realize how biased we are to a worldview where line 12 happens, then line 13. Well, then we go down and do some code down here in a method, but then we come back up and we're on line 14. It really biases how we think. And so we just don't have the mental models, necessarily, until you start playing with it, to see how these things are interacting with each other across time. It's very weird. One of the reasons that we don't really deal with it is that, like, hey, like 90% of us are writing Rails and Sinatra or other framework applications for the web, right? And in a short-lived Rails process, like, whatever, adding concurrency is not the lowest hanging fruit. You should be fixing 80 other things, right? You should be fixing ActiveRecord problems and garbage collection and a whole host of problems that your app has well before you start thinking about adding concurrency. So it's really easy to fall into the trap of thinking, yeah, I read a blog post about concurrency and I'm gonna add it to my app and it's gonna be great. No, it's not, because time and time and time again, right?
Everyone who's worked on a project more than a month has seen this, right? We overestimate the benefit of what we're doing and we underestimate the cost, every time. And it doesn't matter what numbers we say, it almost always happens. It's really easy to feel like a pony, especially when I'm wearing my rainbow onesie. Oh yeah, I cosplay. But feeling like this is never an excuse. You should go write some code and understand these things, because there are other languages that are knocking on the door, that are nipping at the heels of Ruby. And Ruby's gonna change. Ruby is a wonderfully dynamic language with a really vibrant core team that responds to problems, extending it and thinking about problems. And these sorts of things are going to come into play at some point. And so it behooves you to try other languages that stretch your thinking about computer science, that introduce you to new problems and new areas for thought and exploration. And, just go write some code. It's not hard, it's just code. It's what Brandon did this morning. Find somebody who knows this stuff and just ask them a question and get started, because nine times out of 10, they're just waiting for you to ask them, if you're actually interested in this sort of stuff. Here are four or five things to look at. I'm a big fan of the Seven in Seven books, like Seven Concurrency Models in Seven Weeks. Anyone else like those? Anyone hate them, actually? Anyone know what I'm talking about? Okay, like eight people. It's basically like, here's a chapter, read it this week. Here's another chapter on a different topic, read it this week. They're really cool. There's the concurrent-ruby project, which has, like, 60-something different advanced concurrency patterns that they're pulling in from other languages and writing in thread-safe Ruby. It's a really interesting project, to see implemented in Ruby what other languages are doing. It helps you understand it.
And then of course, Rust for Rubyists by Steve Klabnik has a lot on concurrency, and Rust is doing some really interesting stuff in that realm too. I'm well out of time. I wanna thank everybody. A little discombobulated from lunch being kind of weird, but thanks for sticking around. Thank you, Terry.