I hope everybody had a good lunch and you're letting it settle, but the danger of talking right after lunch is you get that feeling that it's time to kick back and relax, and your mind drifts a little bit. So we're going to try to fight that today, because we've got a really interesting talk here about threads. Raise your hand if you've done significant programming in a multi-threaded environment. Okay, good, so you're going to know a lot of this anyway. How many people have never done any multi-threaded programming? Okay, it's going to be interesting. My name's Jim Weirich, I'm the Chief Scientist for EdgeCase in Columbus, Ohio. I myself am out of Cincinnati. You can follow me at jimweirich on Twitter if you're interested. And I'm here to talk to you about a problem. In 2005, Herb Sutter, who is a brilliant guy, an editor of a C++ magazine, really sharp fellow, wrote an article saying that the free lunch is over. And he's talking about something called Moore's Law. If you're not familiar with it, Moore's Law states that the number of transistors on a chip doubles every 18 months to two years, somewhere in that time frame. So it's exponential growth, and we have been reaping the benefits of this exponential growth in technology. This has been going on since about the late 50s, early 60s. So if you plot these things, the green dots on the graph represent the number of transistors on a chip over time. And as you see, it's a fairly straight line, straight on a logarithmic scale, which means exponential growth. And there's really no sign that that's going to stop anytime soon, at least not within the next five or ten years. We don't expect that to change, really. However, when you look at something else, the clock speed of these computer chips, and plot that, something very interesting happens right there in 2003.
The line is more or less linear going up to that point, but all of a sudden, it flattens out in 2003. What has happened here? Well, let's first talk about the things you do to speed up a computer. You can optimize the instruction flow in the CPU. A lot of modern CPUs will pipeline their instruction stream, so that you're fetching the data for the next instruction while you're executing this one. You can do multiple things at one time, and you can really optimize that execution process. That's one thing. You can cache results in a local cache, so you can look up values very quickly as opposed to looking them up in slower main memory. And there are various levels of caching you can do, and that's all good; it'll speed up your program if you get good cache hits. But the big gain over the years has not come from execution optimization or caching. It has come from clock speed. The clock speed has been increasing, more or less, in sync with Moore's Law. And that's because the pieces get closer together: the transistors get smaller, they are closer, and you can bump up the frequency and not worry about the propagation delays between the individual components. So clock speed has been a really good thing. However, let's look at this. This is not this MacBook, but the one I had right before it. I got it about two and a half, three years ago, and the model was probably designed about three years ago. So it's a three-year-old design, and it was running at 2 gigahertz. According to Moore's Law, three years is two 18-month time spans, so it should have doubled twice in the time since this machine was designed. Two doubled once would be four; doubled again would be eight. So we should be running on 8 gigahertz laptops right now, today. Raise your hand if you've got an 8 gigahertz laptop.
I remember when one megahertz was hit, and that was a really big... never mind, that shows how old I am. How many people have an 8 gigahertz laptop? How many people have an 8 gigahertz machine at home, maybe a desktop? What happened? These are the MacBook Pros as of several months ago, before the brand new ones came out. If you look at their specs, they're at 2.6 gigahertz. That's nowhere near eight; it's barely better than two. How about the brand new ones that just came out? Everybody's really excited because these machines are fast, right? Well, what's their clock speed? Standard is 2.5, with an optional 2.8. Where's my 8 gigahertz laptop? Well, maybe all the speed is going into desktop machines. How about the Mac Pro? Okay, this has got to be a fast machine, so let's see. The fastest Mac ever, up to two times faster than the previous one, and it's running at 3.2 gigahertz. We haven't even doubled the clock speed in three years. So we're having problems getting these faster machines. Clock speed isn't doing it for us. So we look to the future, and we look at possibilities for making faster and faster machines, because we're so used to Moore's Law and that automatic clock speed increase helping us out. What are we going to do? Hyperthreading is one possibility. Hyperthreading is where you have a single CPU, and somehow, magically, it acts like two CPUs to the operating system. It's real technical, and the operating system has to be able to handle symmetric multi-processing. But if you're doing hyperthreading, theoretically it will get you between a 15 and 30% speed increase. Not bad, but it's not a doubling either. And truthfully, when they actually measure hyperthreading, depending upon the actual architecture, it might even slow down some applications. So hyperthreading, although good, is not the ultimate answer either.
Caching is still a good option; keeping things local and adding more cache is going to continue to help, we believe. But the other big avenue for speed increases is multi-core systems. That's where you have more than one CPU in the same machine doing the work, sharing the load. So if we go back to the big Mac Pro, it has an eight-core system. That means there are eight CPUs running in the system, all sharing the load. So you can have eight things going on entirely separately on the eight CPU cores. Cool. So let's go back to Herb Sutter. What does he say? Applications will increasingly need to be concurrent if they want to fully exploit the continuing exponential CPU throughput gains. Think about that eight-core machine. It means your single-threaded Ruby program, running absolutely as fast as it can, will only use one eighth of the total throughput of that machine. A single-threaded Ruby program will only use one eighth. We can't make those single-threaded programs faster on a multi-core. We've got to do something differently. And they're talking about 64-core or 100-core machines, really highly parallel machines. On a 100-core machine, your single-threaded Ruby program will use one one-hundredth of the capability of that machine. It will not see the speed gains of multi-core unless we do something about that. So you're going to have to learn to program like your mom. I don't know about your mom, but my mom was the person who took my brother to his sax lesson, took me to my band lesson, then to swimming lessons, and was cleaning the house too. She was the one doing everything all at once. And if you're a modern mom, you can do all these things, handle everything all at once. So you're going to have to learn to write your programs to work like your mom. However, consider Charles Miller. He works at a Java shop, but I think what he says is very applicable to what we have to say here today.
He goes through a list of books that he recommends to beginning programmers coming into his shop. One of the books he recommends is Java Concurrency in Practice. He says every new developer gets a copy of this book, and they need to read it immediately, because writing multi-threaded code is hard, and the number of things that Java does under the covers to make it more efficient makes it even harder. Unless you understand the subtleties described in that book about how Java shares data between threads, you will screw it up. And you'll do it in a way that will be almost impossible to test and almost impossible to debug. Now, he's talking about Java, but the threading model in Java is not that different from the threading model in Ruby. And unless you understand the intricacies of what's going on with threads and understand the implications of writing multi-threaded programs, your programs will fail. How many people are excited about the fact that Rails now, or at least very soon, will be thread safe? Are you going to take your application and immediately run it in multi-threaded mode? Good. I've sensed some hesitation in the room, because even if Rails itself is thread safe, even if they've done everything entirely right in Rails, I'm guessing that your application, unless you have planned for it to be multi-threaded, will fail in a multi-threaded situation. I can almost guarantee that. So instead of the modern mom handling everything all at once, all the time, very calmly, your program will look more like this. Okay, so you want to write a concurrent program. Let's demonstrate one. We're going to write a simple concurrent program dealing with an account. Now, an account is a super simple object. It's got an amount that it keeps track of, and it has a debit method right there that subtracts a particular amount from your account.
You've got a credit method that adds in a certain amount, and that's it. That's all there is to an account object. Let's come down here. We're going to create a bunch of threads. See the threads variable right here; that's set to 10 by default. So we'll create 10 threads, and each will iterate 100,000 times and credit that account with a dollar. So: 10 threads, crediting a dollar 100,000 times each. You should end up a millionaire with this account, right? Dead simple class, really simple use of threads. Let's actually run this and see what happens. Where did all my money go? We have half a million dollars missing. Taxes. Let's do it again. Oh, $600,000 missing this time. Even more. Let's do it again. What's going on here? Every time I run this program, I get a different result. That's the deal. This shouldn't happen in programming. You write a program, it's a stupid computer, it does the same thing every single time... not when you're multi-threading. Okay, so let's look at what happens. Here's a representation of the program. Say we've got two threads, and they're both running this increment operation on @amount. When threads run, they don't actually run exactly at the same time. Well, on a multi-core they will, but in typical multi-threading, you switch between threads, so things can happen at different times. So in our case, let's say that we copy from @amount into the local storage of this thread, and we copy a 23 up into that local storage. Now, after we copy it, we're getting ready to increment it, but we don't yet, because the scheduler comes in and says, ah, your time slice is done, let's run this other thread over here. And that thread does the same thing. It copies the 23 into its local register. And oops, the scheduler kicks in again. So we go back to the first thread, and now we increment the amount and copy it back to the @amount variable. And now it's 24. And then the scheduler kicks in again.
And oh, we store 24 back into a variable that already held a 24. We started with 23, did two increments, and ended up with 24 when it should have been 25. This is known as a race condition in multi-threaded programs. And it's all because of these little micro-events that happen at the statement level in your program: your statements are not atomic. They are made up of tiny little steps that can be subdivided, and you can switch tasks in the middle of them. It all has to do with the ordering of these steps: one, two, three, four. And we've got a problem. If the steps happen in a different order, say one, two, then three, four, you would actually increment correctly. So your result depends upon the exact ordering of the little micro-steps that happen in the execution of your program. That is a race condition: the result depends upon the exact ordering of the steps. The key to fixing a race condition is to write code that guarantees that certain steps happen together, that is, happen atomically. You cannot separate those events. To make steps one and two happen together, and to make steps three and four happen together, we use what's called a critical region. You want to disable context switching right before you run the credit method, and immediately after you're done, you re-enable context switching. Something like that. Actually, you only have to disable context switching that affects this same piece of code. So, you know, this is a little strongly stated, but it would be good enough to prevent the race condition. And we can do that quite easily in Ruby, actually. This is not a problem. We can fix this. We have this thing called a mutex. You require 'thread', you create a mutex, and you say mutex.synchronize do account.credit(1) end, and that means all the little micro-steps within that credit call will happen as one atomic unit.
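To make that concrete, here is a sketch of what he's describing: the bare account class plus the mutex-synchronized crediting loop of version two. The source isn't shown in this transcript, so the exact names and numbers are my reconstruction of the talk's description.

```ruby
# The super simple account from the talk: an amount, credit, debit.
# (On Ruby 1.8 you would also need: require 'thread' to get Mutex.)
class Account
  attr_reader :amount

  def initialize(amount = 0)
    @amount = amount
  end

  def credit(n)
    @amount += n   # read-modify-write: several micro-steps, not atomic
  end

  def debit(n)
    @amount -= n
  end
end

account = Account.new
mutex   = Mutex.new

# Version two: 10 threads each crediting $1 100,000 times, with every
# credit wrapped in the mutex so its micro-steps run as one atomic unit.
threads = 10.times.map do
  Thread.new do
    100_000.times { mutex.synchronize { account.credit(1) } }
  end
end
threads.each(&:join)

puts account.amount   # => 1000000, every run
```

Drop the mutex.synchronize and you get the lost-update runs he just demonstrated, with a different wrong total each time.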
And no one else referencing the same account and synchronizing on the same mutex will interfere with your atomic activities. So this is thread safe now. Yay! We have written thread safe code. Let's demonstrate that. So let's look at version two of this code. We create our mutex, we iterate our 100,000 times, and every time we do, we synchronize and do our credit operation in there. And that's really the only change. So let's run version two of this. And, wow! Our money is safely credited to our account. Yay! We have a thread safe program. Now, it's a little inconvenient that we have to remember to call synchronize every time we credit or debit the account. We like to abstract things and hide them away behind interfaces. It would be nice if we could put the mutex inside the account object. And then every time we debit and every time we credit, there we go, we automatically synchronize. The beauty of this approach is that down here, the loop where we actually use the credit and debit methods is the same as in the original non-thread-safe version. We've moved all the thread safety into the object, and we don't have to worry about it. And that's the goal for us programmers, right? We want to make it easy for people to use our objects. So we move everything inside, and now everything's safe. This version is safe as well; it will run perfectly fine. Okay, look at version 4. Version 4 uses the same account as our last example, where the mutex lives inside. Everything's thread safe, right? Because we're synchronizing, we're doing it right. But this program is a little bit different. Instead of one account, we have two accounts. Account A is initialized with $100,000. Account B is initialized with no dollars. And we want to move money from account A to account B, $1 at a time. So we're transferring: we debit A and we credit B. And we do this while the amount in A is greater than zero.
So while there's still money in account A, we want to move it over to account B. And, as you can see here at the top, we're going to use 25 threads to accomplish this. So 25 threads are going to be moving money from A into B, $1 at a time, until A becomes empty. ruby race4. Ah, fine, no problem. Our thread safety works. Are we happy? Nope. Where did those extra dollars come from? That's not right. Run it again... now it's $13 extra. Run it again... that one works. We're okay again, right? The program's fixed. This is the danger of multi-threaded programs. Errors can be there and you will never, ever see them, until the timing happens just right to cause problems. What's going on is that we're checking the amount here, in the while condition. We are reading the amount outside of the synchronization. And we've got 25 threads all checking to see if the amount is greater than zero. When we get down to $1 left in account A, all 25 threads can come in and say, oh, we've got $1, we can transfer another $1. And some portion of those 25 threads will go ahead and run. Each of them will decrement $1 from A, thread safely, and deposit it in B. But since there's only $1 left, not all of those transfers should have happened, so they overdraw account A and push it below zero. So what we need to do is make this whole operation thread safe. Let's look at version five. And this is where it gets ugly. I could put the synchronization right here, around the whole loop. But then the entire loop would be inside the synchronization, and you don't want to do too much work while you're synchronized, because that locks everybody out from doing any useful work. It would be thread safe, but only one thread would do all the transfers, and the other 24 threads would sit there waiting for the lock to be released. And when the lock was released, they'd say, oh, the account's empty, we have no work to do. So it would be an entirely pointless way to make this multi-threaded.
To make this useful, you check the amount outside the synchronization, but inside, we need to re-check it. We're checking the amount again just to be safe. So that fraction of threads that think there's still a dollar left will come in. One of them will get in on the synchronization and say, oh, there really is $1 left, and he will decrement it. All the other threads that got in will then synchronize, one at a time, and say, oh, it's changed since I checked it in the while loop; I'm not actually going to do the decrement now. And this is the thread safe version of that. Oops, race five. And this will work, and it will work every time we run it now. I like that. Yes. Am I sure? How can I be sure? I could test it. But if I test it once, am I sure? If I test it twice, am I sure? If I test it 100 times, will I be more sure? It's difficult. It's difficult to test for multi-threaded problems. It's really hard to do. Now, this is really unfortunate, because all of a sudden I've got a synchronize outside of the account again. And that's because we're going between two accounts. We've got one mutex that is used to lock all accounts. That's not good. So here's another small variation. All I'm going to do in this version, version six, I believe, is move the mutex back into the account, so every account gets its own mutex. And I'm going to provide a synchronize method on account, so that in my loop, I do my while, I synchronize on A, and I synchronize on B, and then I do the transfer. So I'm synchronizing on both. Now each account has its own synchronization, I have to synchronize on both of them, and this should work pretty well. There's no real problem with that. I'm going to optimize it a little bit by moving the if-amount test outside the inner synchronization; it doesn't have to be there. But basically this will work as well. Okay, we've got about 20 minutes left, so let's move on.
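Here is a sketch of that version five, with the cheap check outside the lock and the re-check inside. The structure is from the talk's description; the exact code is my reconstruction.

```ruby
# Plain account again; thread safety lives in the calling code here.
class Account
  attr_reader :amount

  def initialize(amount = 0)
    @amount = amount
  end

  def credit(n)
    @amount += n
  end

  def debit(n)
    @amount -= n
  end
end

a = Account.new(100_000)
b = Account.new(0)
mutex = Mutex.new

threads = 25.times.map do
  Thread.new do
    # Cheap check OUTSIDE the lock, so the 25 threads aren't fully
    # serialized and idle ones can quit quickly...
    while a.amount > 0
      mutex.synchronize do
        # ...and the RE-CHECK inside: another thread may have taken
        # the last dollar between our check and our lock.
        if a.amount > 0
          a.debit(1)
          b.credit(1)
        end
      end
    end
  end
end
threads.each(&:join)

puts a.amount   # => 0
puts b.amount   # => 100000
```

Without the inner re-check, several threads can each see $1 remaining and all transfer it, overdrawing A below zero, which is exactly the bug in version four.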
One more kind of race condition here. I'm going to again start with the account that has the mutex inside it, the one with the explicit synchronize method. And this time, we're going to create... let's see here. I'm going to create an array of accounts, one for each thread that we're running, and I think I've got five threads in here. Every account has $100,000 in it. And then I'm going to create a bunch of threads, each taking a from account and a to account, and transfer money from the from account to the to account. Actually, not all the money. What does transfer do? Let's see... transfer just transfers $1 from one account to the other, and it uses the proper synchronization: the from account gets synchronized, the to account gets synchronized. So I've got five accounts. One transfers to two, two transfers to three, three transfers to four, four transfers a dollar to five, and five transfers a dollar back into one. So I'm moving $1 all the way around the loop with five threads. It's like everybody takes a dollar and shifts it one account clockwise through that ring of accounts. Very simple, right? So this is version seven, ruby race7. What's going to happen? Deadlock! It bails. What in the world is deadlock? How many people have seen a deadlocked program in Ruby? Okay, okay, great. Deadlock happens when you have all the threads in your system waiting on something to happen. And because all the threads are waiting, nothing can happen. We've reached a point where all your threads are waiting and cannot run, and that's called a deadlock situation. In this case we had one waiting on two, two waiting on three, three waiting on four, four waiting on five, and five waiting on one. We had a chain of threads waiting for each other all the way around the circle, and once that happened, no work could be done. That's a deadlock situation.
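In code, the ring looks something like this sketch (my reconstruction). Each transfer locks two accounts; if every thread locked from-then-to around the ring, the cycle of waiters just described could form. The version below instead locks the two accounts in a fixed priority order, the resource-ordering cure the talk turns to next, so it runs to completion. The loop per thread is my addition to actually exercise the locks; in the talk each thread transfers just $1.

```ruby
# A thread safe account carrying a priority used for lock ordering.
class Account
  attr_reader :amount, :priority

  def initialize(amount, priority)
    @amount   = amount
    @priority = priority
    @mutex    = Mutex.new
  end

  def synchronize(&block)
    @mutex.synchronize(&block)
  end

  def credit(n)
    @amount += n
  end

  def debit(n)
    @amount -= n
  end
end

# Lock the two accounts in priority order, never in from-then-to order.
# With one global ordering, no cycle of waiters can form, so the ring
# of transfers below cannot deadlock.
def transfer(from, to, n)
  first, second = [from, to].sort_by(&:priority)
  first.synchronize do
    second.synchronize do
      from.debit(n)
      to.credit(n)
    end
  end
end

# Five accounts in a ring, each thread shifting dollars clockwise.
accounts = (0...5).map { |i| Account.new(100_000, i) }
threads = (0...5).map do |i|
  from, to = accounts[i], accounts[(i + 1) % 5]
  Thread.new { 1_000.times { transfer(from, to, 1) } }
end
threads.each(&:join)

puts accounts.map(&:amount).inspect
```

Every account is debited $1,000 by its own thread and credited $1,000 by its neighbor, so all five end back at $100,000 and no money is created or destroyed.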
It turns out deadlock can happen any time you lock more than one resource at the same time. There are several solutions to deadlocking; depending on what kind of application you have and what your needs are, different solutions work better. One way is just to prioritize all your resources. So race version eight adds a priority field to the account, and when you create an account, it gets created with a particular priority. When we do the transfer, we take the from account and the to account and sort them by priority to decide which one to lock first and which second. So in effect it'll be one waits on two, two waits on three, three waits on four, four waits on five. But five will not wait on one, because we invert the order of their synchronization: when you have one and five together, one will always wait on five. So five is never waiting on anybody, and you never get the deadlock situation, because the cycle of waiters can't form. By prioritizing your resources, you can get out of the deadlock problem. Okay, so what have we learned here? Oops, that was the wrong slide, that's tomorrow's presentation. Okay: three simple programs, all involving multi-threading, all failed in strange and marvelous ways, in ways that were hard to test and hard to detect unless you thought about them up front. Threading is hard. Now, the solution to threading problems. Number one, protect every shared memory access with some kind of synchronizing lock. We did that with the mutex. And make sure that's every access, reading and writing. You can't just protect the writes, because the value might change out from underneath a reader. Number two, be aware of extended situations that need to be atomic. In our case, the two accounts needed to be atomic together, and it wasn't enough for each account to be thread safe on its own. The transfer of the money had to be thread safe as well. These are things that can slip by you if you're not aware of them.
Number three, you need to have a strategy in place to avoid deadlock, and you've got to think about this up front, because deadlock will only ever happen at midnight on the weekend. Number four, you need to evaluate every single line in every library that your program uses to see if they also follow rules one through three. What? Yes. It's not enough to write your app to be thread safe. You have to restrict yourself to libraries written by people who are just as dedicated and just as smart as you are, who have thought through the threading issues and made their libraries thread safe. Then you have to evaluate: these two things that are individually thread safe, are they thread safe together? Because we saw that two accounts that are individually thread safe may not be thread safe together. This is a hard problem. If your program is small, yes, you can go through every single line of your program and every single library you use and do that. But if you've got a huge program, this is nearly impossible, especially if the libraries were not written with thread safety in mind. So, a couple of horror stories. I used to deal with real-time data, working in a multi-threaded environment, and I remember my very first multi-threaded program. Threading is cool. It is fun to do, and I encourage you to go out and try it, but be aware: it's hard to get exactly right. So we wrote our application. It was a real-time data acquisition system with lots of threading going on, and when we were moving data from this area of the program to that area of the program, we had to lock that memory. The locks provided by the operating system were rather slow. And so we thought, we can do better than that. We can design our own synchronization method and bypass the locks the operating system gives us. If you ever start thinking in this manner: don't. So we analyzed it. I mean, we knew what we were doing. We had version one of the thing out already, and it was thread safe.
We went to classes on thread safety. We knew what the issues were. We analyzed the assembly code generated by our solution, and we determined that it was very safe. Not 100% safe, but very safe. In fact, we calculated the odds of it failing to be about one in a million, and I felt very good about being one-in-a-million safe. I mean, I'd ride on an airplane that was one-in-a-million safe. So we shipped the thing; it didn't go into production, but it did go into test. We were testing our system, and we found that in the test cell, our system failed about once a day. It would just freeze up and lock. What? So we thought about this. Let's see: we're gathering data 10 times a second, there are 60 seconds in a minute, 60 minutes in an hour, eight hours in a working day when the thing is running. That works out to about a million operations a day. One chance in a million of failing. Yes, we pretty well nailed that. Next story: Rake. You're probably familiar with Rake. A couple of years ago I added a multi-threaded option to Rake, where you can say this task is threaded, and all its prerequisites will be run in threads, in parallel. And that's kind of handy if you really want to take advantage of it. And it wasn't until this past summer that someone pointed out that we have a locking issue, a potential race condition, in the threads. It was in there for a year and a half, two years, and nobody saw it. I don't know if it ever even happened in real life, but someone detected it by examining the code and pointed it out to me, and we then had to fix it. The double-checked lock is an interesting example. This is a Java example, so I'm not going to dwell on it long. But people decided, again, that the locks provided by the system were too expensive, and they wanted to get around them. So they came up with a really cool idea, kind of like what we did in our code. Check if the instance is null. And if it is, we need to create one.
But if we have multiple threads trying to create that single instance at the same time, that's not going to work. So we're going to have to lock, and check again whether the instance is null. If we're inside the lock and it's still null, we should be safe; we create it, and we're done. So if anybody else comes in and sees it's null but doesn't get to the lock fast enough, because we beat them to it, they will eventually get the lock and say, oh, it's already created. The problem with this is the way that Java memory works, and it's a really technical detail I'm not going to go into; I don't fully understand it myself. But Java is allowed to do memory writes at times a little different from what you would expect from the source code. And it is allowed to reorder those writes in such a way that the object returned is actually not yet initialized by the constructor. So if you write it like this, it's wrong. It will fail one in a million times, something like that. Don't do this. You can sit down and analyze this all day long, and it looks perfectly safe, unless you know the details of the memory model behind the Java virtual machine. So, again, that makes writing multi-threaded code really, really hard. And why is it hard? Because we share mutable memory. Memory that can change, shared between threads: that's where the problem is. So what can we do to make concurrent programming a little bit easier? We are down to eight minutes, so I'm going to jump through these last examples very quickly. I want to take a look at some other programming languages. Paul Graham says that people sometimes get so enraptured with the programming language they're using today that when they look at other programming languages, they view them through a lens that filters out all the advantages the other language has, and they never see them. I want to break that lens today, and I want to show you two examples.
If I have time, I'll briefly mention a third that I find really interesting. But at least two other languages have a unique approach to handling shared mutable memory. The first one avoids the problem by not sharing memory and not mutating memory, and that's Erlang. Erlang is an interesting language. Imagine a language that has no variables, no assignment statements, and no explicit loops. Or rather: variables don't, assignments can't, and loops never do. You only have constants: once you bind a variable, you cannot change it, so you cannot mutate it, and that's quite a restriction. It does not have an assignment statement; it only has pattern matching, which is interesting in itself but doesn't really matter much for multi-threaded programs. The bigger thing is loops: if you have a loop, you generally have a variable that changes every time you go through the loop, and if that variable changes, that's mutable state. Loops assume you have mutable state. So Erlang says, we're not going to loop; we're going to do everything entirely by recursion. Ouch! That hurts a lot of people's heads. But you avoid mutable state. So here's an example of an Erlang program, based on pattern matching: fact of zero will always return one, and fact of any arbitrary n will be n times the factorial of one less than the number you passed in. It's a straightforward mathematical definition, using recursion instead of explicit looping to calculate it. Now, the problem with recursion is that as you recurse over and over and over again, your stack grows, and this version of the factorial actually has that problem. So what you would generally do in a language like Erlang is write something like this. And let me warn you: I'm not an Erlang programmer, I'm just kind of dabbling in it, and I'm sure those of you who have really studied it can do a much better job.
But you break the factorial out into a helper function and initialize it with an accumulator of one. So in fact2, if you get a zero and an accumulator, you return the accumulator. If you get some arbitrary n and an accumulator, you call fact2 with one less than that number, and the accumulator multiplied by n. Now, the interesting thing about that recursive call in fact2 is that when you return from it, there is nothing left to do but return immediately. As soon as that inner call to fact2 is done, the outer call returns immediately, and that's called a tail recursion, because the call is the last thing done. You can implement tail recursions as jumps back to the beginning of the function: just re-execute the function, reassigning the input variables. But that's an implementation detail. Mentally, we still think about this as recursion; internally, it's implemented as a loop, so there's no stack growth. So you can recurse infinitely in Erlang if you use tail recursion. I am running out of time, so I'm just going to state that the Erlang demo runs without thread problems. All this code is available on GitHub, so if you want to download these things, look at them, and critique my Erlang code, that's cool. But I really want to get to the next language as well, so let's move on.
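His examples are in Erlang, but the accumulator shape carries over, so here is the same pair of definitions as a Ruby sketch. One caveat: CRuby does not eliminate tail calls by default, so unlike Erlang, the tail-recursive version still grows the stack here; it's shown only to illustrate the shape.

```ruby
# Direct translation of the naive version: the multiply happens AFTER
# the recursive call returns, so each level must keep its stack frame.
def fact(n)
  return 1 if n.zero?
  n * fact(n - 1)
end

# The fact2 shape from the talk: carry an accumulator so the recursive
# call is the LAST thing the function does (a tail call). The Erlang
# compiler turns this into a loop; CRuby does not, so treat this as an
# illustration of the pattern, not a constant-stack implementation.
def fact2(n, acc = 1)
  return acc if n.zero?
  fact2(n - 1, acc * n)
end

puts fact(5)    # => 120
puts fact2(5)   # => 120
```

The accumulator carries the partial product down the recursion instead of leaving work to do on the way back up.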
Clojure is a version of Lisp that is entirely functional. Well, functional enough: unlike Lisp, which is just kind of functional, Clojure really is functional, but it gives you some escape hatches that are still thread safe. Let's talk about how it does that. Oh yeah, people get down on Lisp, but Lisp is fun, and if you've never studied Lisp I really recommend you do. The Prags have just put a book on Clojure into beta a couple days ago, so I've got that and I'm working through it, because I'm really excited about it. So there's an ad for you.

Quick Lisp primer: numbers are numbers; names are called atoms. You put things in parentheses and that becomes a list of items; you put things in square brackets in Clojure and that becomes a vector of items. Functions are prefix, so (+ 2 4) adds the two numbers together. The tick means a quoted literal, so '(a b c) means the list of a, b, and c; it doesn't mean evaluate the function a with the arguments b and c. The tick creates a literal: (+ 2 4) is 6, but '(+ 2 4) is just the list (+ 2 4). You define functions using defn, the name, a list of arguments, and then the body. It's got a lot of parentheses, but you can actually read this: if the number is zero the answer is one, otherwise multiply by the factorial of one less. This is non-tail-recursive in Clojure. If you want a tail-recursive version, you use the recur keyword and the loop keyword, and it recursively calls the loop over and over again, re-binding the loop variables. So this is very similar to the Erlang version, but in the Clojure style. There are lots of functions in Clojure to manipulate lists and sequences, and here are some examples of that. Some other cool stuff: (repeat 1) creates an infinite list of the number one, and you can cycle through those. And I've got one minute, so I'm really going to move on. You can interface to Java really, really easily; Clojure runs on the JVM.

OK, two kinds of variables in Clojure. There's a variable that is defined per thread. Actually it's shared among multiple threads, but if you modify it, that modification can only be seen in a single thread. So you can have variables that change in Clojure, but the changes are restricted to whatever that one thread is, and that's actually kind of a useful thing to do. However, sometimes you want to share the changed values between two threads, and Clojure has another way of doing that: you create a reference. Here we create a reference r of zero, and we can deref r and that will return zero. However, if we try to do (ref-set r 1), it will fail. That is an error; in fact I think it might even be a compile-time error, it detects it right away before running the code, because you are only allowed to change these kinds of variables within a transaction. You create a transaction with dosync, and within a dosync call you can set r to 1. Then when the transaction is over, all the other threads see all the changes to all the ref variables atomically, so there are never these in-between states.

Now the interesting thing about this approach is that if you use ref variables in Clojure to share state between threads, you never have to synchronize, you never have to explicitly wait, and you cannot get in a deadlock situation. Cannot happen. All these evil things that happen to you in a multi-threaded program do not happen in Clojure, because the language is designed with that in mind. I think it's a really, really interesting idea, a really powerful idea, one I certainly want to explore some more in my own coding. So again, I'm out of time; I'm going to skip the actual demo. The code's up on my website and you can run it.

Summary: concurrent programming is hard. If you want to do it in a traditional language, if you want to use Ruby, I really recommend staying sequential as much as you can. If you have something that's really heavily multi-threaded, take advantage of something like Erlang, which handles it beautifully, or take advantage of something like Clojure, which also has a different way of handling it, but also very elegant. And, like a good pragmatic programmer, be open to other ideas. So there's where the site is, so you can download it from there.
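The Clojure snippets described above, the two factorials and the ref/dosync transaction, might look something like this. This is a sketch reconstructed from the talk's description, not the speaker's slides:

```clojure
;; Non-tail-recursive factorial: if the number is zero the answer is one,
;; otherwise multiply by the factorial of one less.
(defn fact [n]
  (if (zero? n)
    1
    (* n (fact (dec n)))))

;; Tail version using loop/recur: recur jumps back to the loop,
;; re-binding the loop variables each pass, so the stack never grows.
(defn fact-loop [n]
  (loop [n n, acc 1]
    (if (zero? n)
      acc
      (recur (dec n) (* acc n)))))

;; A ref holds shared, transactional state. Deref with @r (or (deref r)).
(def r (ref 0))

;; (ref-set r 1) outside a transaction throws an IllegalStateException.
;; Inside dosync it succeeds, and when the transaction commits, every
;; other thread sees the changes to all refs atomically.
(dosync (ref-set r 1))

(println (fact 5) (fact-loop 5) @r)  ; prints: 120 120 1
```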