Welcome to Emerging Languages Camp 2010. Go, by Rob Pike. I'm not the sole inventor here; there are other people involved, and I don't want to take full credit for that. But thanks for letting me come and talk a little bit. I'm actually giving three talks at this conference, and dividing up the pieces is a bit of a challenge. So I thought today, because this is specifically a language session, I'd focus on a particular detail in Go that has gotten a lot of attention, and that is the concurrency stuff. A lot of the people who've remarked on Go or looked at it have thought that there's something really unusual and maybe even unique about the concurrency features inside Go. And I think they are unusual, but they're not without origin. They actually have a long history, and I thought it would be worth telling this community, some of whom might not know the background, where these ideas come from and why Go is the way it is with respect to concurrency and communication. The story actually started in the 1970s. Can somebody start the timer there? It goes back to the 1970s, when multiprocessors were really a research topic. I wouldn't say there were no multiprocessors available commercially, but there were very few, and I don't even remember when they appeared. I do remember some other things about the 1970s, though. And the problem was that nobody really understood how to program these things. We knew it was a good idea to have a lot more processing available, but we didn't know what to do with it. And there were a bunch of research ideas that floated around at the time: things like monitors, semaphores, locks, which we now know more as mutexes. And there was also a line of inquiry involving message passing. But it wasn't clear which of those really worked well without the proper hardware to try them out on. It didn't seem like we really understood them. 
A lot of these things were brought up in the context of understanding how to program attached devices inside a single-processor system. So it's kind of, you know, what are we doing here? We don't get it. We know it's important. What are we going to do? And so the real breakthrough happened in 1978, when Tony Hoare published a paper called Communicating Sequential Processes, which is commonly called CSP. And for some of you who may have heard of this, it may seem like this is some sort of academic silly little thing. But I'm going to explain to you that there's actually a lot going on in the original ideas of CSP, and we still haven't fully mined out this paper. It was a truly seminal paper. What it proposed was a model programming language, not one you could actually execute, although some people later did implement it pretty much as defined in this paper. And it was very unusual. It was really a groundbreaking idea. What it did was turn I/O into a fundamental property of a programming language and couple that with this notion of parallel composition of sequential processes. So you have things that are independently sequential, but you compose them and you have them communicate. And that model is the fundamental property behind essentially everything we're talking about today. So there were several things in here. Well, actually really only two things. First of all, communication, that is I/O, is fundamental, and parallel composition of sequential processes gives you multiprocessing. That's all you need, and even to this day that is pretty much all you need if you think about things the right way. This paper is pretty amazing. There are more ideas in this one paper (I just read it again last week when I was putting this together) than in probably any other ten papers you could easily put your hands on. It's really worth talking about more, and I'll talk a little bit more about the paper itself later. So what does this language look like? 
Well, it's pretty mathematical and strange, but it's actually kind of pretty in a way. It generalized these things called guarded commands, or Dijkstra's guarded commands as they're more commonly known, which are basically just a way to express conditional processing in various ways. Hoare generalized this into the idea of communication as the guards, and the notation he developed had several simple elements. This is not complete, but it's pretty close to a complete definition of the language. So P!value sends a value to process P. P?var receives a value from P and stores it in a variable. Then there are various ways to run commands sequentially, in parallel, repeatedly, and in alternation. The last really strange thing is the alternation statement: in the nature of Dijkstra's guarded commands, the various conditions, the guards, little a and little b, are actually evaluated in parallel, and whichever one can proceed is the one that goes. And those guards, of course, can be communications or, in fact, much richer expressions. And then on top of that, you have this notion that when you communicate, when you exchange a value on P, the sender and the receiver are synchronized. And that's it. That's pretty much the whole language. And so here's an example. I'm not going to talk through this example because it's, you know, messy. It actually does work, I believe. I'll show you later: I translated it into Go, and the Go version works, so that implies this works. So there are three processes here: copy, disassemble, and assemble. Copy copies all the characters from west to east. Disassemble takes characters arriving on 80-character cards. Remember cards? I do. And it just emits them as sequential characters. And then the assemble guy takes the incoming sequence of characters and reassembles them into 125-character lines. So this is a very stupid problem, but it was the kind of thing you thought about in the 70s. I've got 80-character cards. I want 125-character lines. 
How do I reformat it? And that doesn't sound like a hard problem, but if you try to write it in a traditional standard programming language, especially one from the 1970s, it's a remarkably fiddly, annoying problem. And here is this sort of three-element process that we then combine at the bottom. And this is really the key point. This bottom line here is the parallel composition of those three processes. And you can observe that you don't actually need copy. You can run disassemble directly into assemble. But the reason copy is there is that in the paper he substitutes other processes inside there to do other intermediate operations on the characters as they go by. And of course, to anyone who's programmed in Unix, that looks just like a pipeline. And that's no accident, but in 1978, pipes were new. And this was a way to formalize the idea of a pipe inside a language. There were more properties in this language that were interesting, both for good and for bad. The ports that were communicated on, like the east and the west in that previous example, were actually processes. They were names of processes. The idea was you communicate with a process. This is, after all, communicating sequential processes. But they were not programmatic. They were just defined in the program. And so that meant that, for instance, you could write a concurrent prime sieve to generate the first thousand primes, but you could not write a program that generated n primes, because there was no way to parameterize the number of processes. Those processes all had to be created statically. As a slightly more interesting example, he gives an example where you do matrix multiplication on a three by three matrix. But you can't write the code for an n by n matrix in this language, because you need a process for each of those dimensions and you can't express the dimensionality as a variable; the processes have to be static. And this will become very important in a minute. 
But there are some other nice properties of the language. It had this idea of pattern matching to analyze messages. So remember those guards that do conditional execution? Here's a guard that says: if we receive from C a pair x and y, then you can do A. And the point is that x, y is kind of a pattern match. I'm expecting an x and a y. I'm going to receive a pair on this thing. So if you cannot parse the message into an x and a y, then that guard blocks and another guard must execute. And you can write more general conditions, like you can say: if i is greater than 100, and I can receive an x and a y from C, then go on to do A. And so that's a more general thing, and you'll see an example of that in a minute. One interesting problem with CSP, from where we're going to go, is that you can't use a send as a guard. You can only receive. You can only say, am I allowed to receive one of these? You can't say, am I allowed to send one of those? And that didn't seem like an issue at the time, but we'll see: it actually is a generalization we'd like to make. So to recap, you've got this paper that defines a whole bunch of things from scratch in a kind of mind-blowing way for the time. You've got this idea of a parallel composition of independent processes. You've got communication and synchronization combined into a single set of operators. You've got things that don't share memory. These are independent processes and they just exchange values. They don't share anything. So there are several models you'll recognize that are not supported by CSP. In particular, there are no threads. There are no mutexes. These are different things altogether. And so here we are in 1978. We have a paper that pretty much lays out everything we need to know about how to program multiprocessors, how to think about concurrent algorithms, how to do communication and synchronization together. We can program parallel code and there isn't a lock anywhere. So this is a pretty important step. 
But it's also a theoretical paper. I would bet money that every program in this paper is correct even though it was never executed. But that's Tony Hoare. That's not a normal person. And so we have to turn this into something useful. And what happened was, over the next few years people sort of digested this, and the road forked into multiple paths. One of the paths generated a language called Occam. And Occam is interesting in this story because it's the closest to CSP. A company called Inmos in Britain, in Bristol, designed a machine in the early 80s called the Transputer. And the Transputer was a parallel machine. It was eventually a bunch of processors on a chip. They were unusual processors, but they were general processors. But they had essentially CSP connections in hardware. So each processor could talk to its neighbors with a communication-and-synchronization device. And Inmos designed this language, Occam, to represent the communication capabilities of this hardware. In fact, the Occam compiler was written in Occam, although it was something of a tour de force to write code like that in Occam. But it was a really interesting piece of hardware and an interesting language that was fundamentally CSP with slightly different syntax. But you recognize the bang and the question mark, the conditions. This program receives 100 things from c1 and 100 things from c2 in arbitrary order, adds them all up, and then sends what the two counts were at the bottom. And if you squint a bit, you can see that it's very much like the syntax of the original CSP. And in fact, Tony Hoare was a consultant on this system. And it was pretty important because it showed that you could actually write real code that really ran in this world. But it was a peculiar language, whose most lingering effect might be its use of white space for indentation rather than anything in the communication model itself. But another branch that came out of here, although quite a bit later, was the Erlang branch. 
And Erlang differed from Occam in that it was very network-oriented, it was functional, and it focused on the pattern matching capabilities inside CSP rather than just the basic communication. Erlang was developed in the late 1980s at Ericsson. There's, I think, a deliberate ambiguity about whether the language is named after the Danish mathematician Agner Erlang or stands for the Ericsson language. But it's different because it's profoundly functional. Now, the original CSP had variables and you could assign them and stuff like that. But without much mental strain at all, you could turn that into a purely functional idea where a variable is only assigned once and never changes value. And that has certain expressive capabilities. They're important when you're trying to exchange messages between processes without sharing data. And so Erlang took that, it promoted functions and processes as things that work across a network, and it brought in the idea of a mailbox. So a process in Erlang is very much like it is in the original CSP. But there's now this somewhat richer model of a mailbox, which is a thing that parses the input it receives. And so you can see this alternation here is receiving a bunch of different things, and depending on the elements of the messages you execute different commands. So the guards become these parsing, sort of message-unpacking operations. And Erlang is actually quite a beautiful language. A lot of interesting software has been written in it, a lot of commercially important software. But it's unusual because it has this profoundly functional and very rigid model of the communication. I don't mean that as a criticism; I really think it's kind of beautiful. But it's not the kind of language that I, for instance, want to write code in all the time, because I can't comfortably write code that expresses the things I want to do, like write operating system software and stuff like that. 
And so we come to the third branch, which is the Newsqueak, Limbo, Go branch, which is clearly where we're going to end up here. And here the focus is different, because the Occam branch was really based on ports and communicating basic messages between processes, and in Erlang we've got these mailboxes and parsing. In this branch, we're going to focus on a thing called a channel. There was a paper that probably none of you have read, and I don't think you probably want to dig back that far, but in 1985, Luca Cardelli and I wrote a paper for SIGGRAPH on a language called Squeak, which is unrelated to the thing that's called Squeak today. And Squeak was a toy language. It was kind of like yacc, which takes a grammar and generates a C program for you. The Squeak processor took a concurrent program and turned it into a sequential program with a state machine, compiled into a C program for you. So you could write code with this, and it worked, but you wouldn't want to write much in it. It was really a research idea to show what would happen if you used the ideas of CSP to program graphical user interfaces. And it was a research paper, and it was kind of fun, and it actually sort of influenced some thinking for a while, but I don't think it has a lasting legacy, except that you can see this is very CSP-like, and it also compiles and runs. You can actually execute this code with the Squeak processor. And again, the important thing is down at the bottom here: you see the parallel composition; that's "type" as in typing on a keyboard. This is a program that lets you type text and have it appear in a window on the screen. So the typing program is the composition of the mouse process and the text process. Okay, but we haven't got channels yet. These things are communicating between processes. 
A few years later, I'd been doing a lot of graphics work and wanted to write real graphics code, and the ideas in Squeak were interesting to me, but I didn't have a programming language I could use to express them. And so I took some time and designed a language called Newsqueak, which I thought was a good name, and around 1989 this became an actual implementation you could use to write real code. And several quite interesting programs were written in it. Again, it was a research language, you wouldn't want to spend your whole time in it, but it was actually fun to write code in, and there was some important stuff written in it. Syntactically it looked like C, but it was a purely applicative language, and it had this concurrency stuff built into it. And so here we have the first language that I got to use where I could write real code with the ideas of CSP, and it was very important to me and a lot of interesting things came out of it. Newsqueak had lambdas, which were called progs in the language, and it had a select statement which was essentially the alternation of CSP, but restricted to communication. You couldn't write composite conditions on those branches. It also had, for you guys, two things that you may recognize; they're going to turn up in Go. Two syntactic inventions arose in Newsqueak. One was the left arrow operator for communication. I managed to find a way to express receive and send using the same operator, which cleaned up some syntax. It also made it possible for the receive operation to not store something in a variable, but to just be an expression. And of course that was important for someone with a C background, to have receiving a value just be an expression. And then also, I was very proud of this: in the Pascal notation that Newsqueak used for declarations, you declared a variable by saying x colon int equals 1. 
That declares a variable called x, of type int, with value 1. But it's obvious that 1 is an integer, so why bother to type all that? So in Newsqueak you could drop the int out, and then that made colon-equals a kind of pseudo-operator that declared and initialized the variable. And that lingers on in Go, and it's a very nice convenience that turns out to be pretty important to the way it works. So maybe this is my major contribution to computer science, I don't know. Here's the prime sieve, the famous prime sieve, the concurrent prime sieve. It's really not the sieve of Eratosthenes, but it's in the CSP paper, in CSP. I didn't show it there because it's quite difficult to understand in CSP; it's a little easier here. Again, I'm not going to talk through all this, but there are a couple of important things to recognize in here. First of all, these are three process definitions: counter, which just pumps out a bunch of numbers; filter, which does the primality filtering; and sieve, which is the thing that composes a chain of processes to do the sieving generally. What's interesting here in this program is in the middle, where it says sieve is a prog of chan of int. That "of chan of int" means that the value of the sieve prog is a channel of integers. And this is actually the moment in this story where something different happens. Because having a function that returns a channel gives you a power that was not available in any of the other languages, at least not with nearly this sort of clarity. So this was a bit of a breakthrough for us. This is the way to write the prime sieve. It doesn't matter how many primes you get. You don't have to pre-declare a thousand primes or something like that. This gives you the programmatic ability to stitch an arbitrary dynamic structure out of channels. And so channels here are first-class values. And this is the fundamental difference on this arc. 
The thing about a channel being a first-class value is that not only can you pass it around and use it as a return value from a function, there are two important things that come with that. One is that I can send you a channel on a channel. If I have a channel, I can give it the type channel of channel, and that means I can send a channel along it, just like on a regular channel I can send an integer. And that gives me the ability to communicate the ability to communicate. And of course, everything in computer science that matters is an indirection. We've just built that indirection into communication, which is fundamental to a lot of these models. And secondly, that channel is itself a form of capability. If I give you a channel that talks to me, then you can talk to me on that channel, but you also have the capability to give that channel to somebody else. It becomes a handle, a bit like a file descriptor. Whoever holds that channel can hand it off to anybody, who can continue to hand it on to somebody else. That indirection of the capability to communicate is really important for doing things like building multiplexers, constructing arbitrary security barriers between processes, and things like that. It's a very, very important idea. Now, things like that exist in languages like Erlang, but not as generally. In Erlang, the notion of a process ID is a first-class value, and I can give capability away by handing that process ID around as a value, but it's not fully symmetric. The receive capability is actually missing. So the problem with Newsqueak was that it was just a toy, and not very fast, and we liked the ideas so much that we wanted to actually use them. And there, Phil Winterbottom, who was working with us at the labs, decided to take some of the ideas from Newsqueak and make them work inside a real systems language, which he called Alef. And Alef, superficially, you can think of as just C with this concurrency stuff built in. You wouldn't be very far off the mark. 
And it was a lot of fun to write in, because we finally had an efficient language you could use these concurrency and communication ideas in. But the problem was it was C underneath, and so you had to deal with malloc and free on your own. In all the other languages we've talked about up to now, memory allocation is implicit in the language, which is another way of saying it's garbage collected. Alef was not, and half of the code in an Alef program that did the communication had to figure out who was responsible for freeing this object you were passing on. Not having garbage collection available in a concurrent language turned out to be a bit of a deal breaker. It wasn't the reason Alef died, but it was a reason why it wasn't as useful as it might have been. Without garbage collection, it just wasn't as nice to use as it should have been. And believe me, there were people like me saying, come on Phil, it needs to be garbage collected. But for his own reasons, which were probably good, he didn't do it, and that was sort of a problem. But a year or two later, Phil, along with Sean Dorward and myself, did a language called Limbo as part of a project at Lucent to do TV set-top boxes and stuff like that. It was exactly contemporaneous with Java. There was a little VM, and it had all this nice... It was quite a nice little language to work in. And it was probably the most successful language on this branch of the sequence so far, because it was a true embedded language with communication primitives and concurrency. It was garbage collected, of course. It was very nice to use in a lot of ways. And it was successful within its limited domain. I think not as successful as it should have been, given the quality of the work in it. It was pretty much a rethinking of Newsqueak 10 years later, with a much better grasp of what needed to be in a language to make it work. And again, channels were the fundamental thing here. 
Limbo had channels as first-class values. But then things happened, things starting with J, and it sort of died away. So about 15 years go by. And I gave a talk about three or four years ago, which is on YouTube, about Newsqueak, in a languages series there. And it got me thinking that it was time to revisit these things. So a few years ago, Robert Griesemer, Ken Thompson, and myself started talking about this stuff again. And we thought, for reasons that I'm going to talk about a lot more tomorrow in the OSCON keynote, it was really time to rethink the whole language approach in general. But as a consequence of that conversation, we've now got a language, which is called Go, that is a compiled, object-oriented language with concurrency built in. It's a modern language with these CSP ideas available as first-class ideas inside the language. And it's wonderful. And when people say that it's novel and unusual, I think they're right. What they're missing is how much delight you can get from programming with these things in the real world. And so we've now got the power of CSP, in the particular model of the channels-as-first-class-values branch. And you get garbage collection, but you have a compiled language, so if you do cryptography calculations or graphical calculations or something like that, they will run very efficiently. And so you get the power of concurrency in an environment where fundamental computation is really efficient. And that's a really, really nice mix. The object-oriented stuff fits in too, but that's not what I'm talking about today. And so you get the best of all worlds in one easy package. And it's actually pretty nice to use. So here's that card reassembly example, slightly simplified. I took out an edge condition that makes the code longer but not more interesting. And you can see that it has some of the flavor of the CSP version. 
There's the copy guy, the assembler, and the disassembler. And there's the composition down the bottom. You just launch the three things, we call them goroutines, the three goroutines together. And they do the work. And this code actually works. I ran it. It functions. Whenever I need to reassemble punch cards, this is the code I'll run. But you can see that it's changed a lot from the CSP version, because you've now got a syntax that feels a bit C-like. You've got the arrow operator for communication. You've got colon-equals and types. You've got channels. But fundamentally, this is the same idea. The idea of writing programs as a parallel composition of communicating sequential processes is now available in a language that you can actually use. And that's pretty nice. It's a long history. It's conceived by some people as being very novel. I think it's novel in its context, but the ideas go back longer than I care to recount. And it is nice to have these ideas available. But there are other languages out there, too, Erlang being probably the most famous, that have very similar ideas rooted in the same paper. Channels as first-class values, though, are unusual, not unique, but unusual, and I think mostly on the Newsqueak branch. And it's also nice to have the combination of high-level modern language capabilities available along with the communication things. CSP was about communicating. Go is about using communication inside an environment with modern language features available. And the fundamental point here is that in order to make this work, you have to have garbage collection and you have to have things like automatic stack management available. It's very hard to graft these onto another language like Alef. It just doesn't feel right, doesn't work very well. 
And there's a lot more to Go than concurrency, as I think I've hinted, and I'm actually giving two more talks tomorrow, which is a little shocking, in the OSCON tracks. So unfortunately, they'll conflict with some of the sessions here, but if you're interested in learning more about Go, you may want to go to the talk tomorrow, where I'll talk in a lot more detail about many of the features of the language. And of course, golang.org has everything you need to know. If there's one thing you take away today, I want it to be: I should read that CSP paper. It is an amazing paper. It is worth every minute you spend with it. And here are some quotes from it. Remember, this is 1978. I will honestly say that some of you probably weren't born in 1978, but some of you probably were. Not everyone here is as young as the people I work with. First of all, notice the first line: processes may not communicate with each other by updating global variables. That's no shared memory. Don't think like that. In the second quote, he says that coroutines, which we sort of call goroutines in Go, are more fundamental than subroutines. I think when you realize that, you understand the power that you get out of it. And then this bottom one may be hard for you to parse, but what it's saying is that a process that you communicate with in isolation is isomorphic to a Simula class, or a Java class or C++ class, for that matter. There's a fundamental sort of isomorphism there. That's pretty important and good to know. So I'll leave those up and take any questions you may have. How are we going to do questions? Do I just repeat them? Yes. How does CSP relate to the pi calculus? That's an interesting question. When I did Newsqueak, I learned, because I didn't know, that it relates pretty directly, because the fundamental process control stuff inside Newsqueak is an expression of the pi calculus in a certain way, with operational semantics and all this kind of stuff that I am not an expert in. 
But people in the talks I gave about Newsqueak would come up and say, do you realize that? I think the CSP paper is the godhead for a lot of that stuff. It wasn't the only thing around at the time. There were other people working on related things, but the CSP paper seemed to distill those ideas into something that you could understand if you were reasonably intelligent, and get the whole idea in one paper that wasn't totally impenetrable. Yes. The question is why the Go type system isn't the same as everybody else's type system. That's a fair question. And to be honest, I'm going to talk quite a bit about it tomorrow. But the basic reason is, and I'm sorry, I'm tipping my hand a bit for tomorrow: I think that the modern type systems have a lot of power, but I think that that power is overstated, and one of the reasons that there are so many people in this room who've invented new programming languages is that they're frustrated with the complexity of the type systems in the standard languages. Let me just cut it off there so it doesn't turn into a discussion of tomorrow's talk. You may be right. Yes. How do you know the code you write in Go is going to have no deadlocks or race conditions? Or be correct. The answer is there are no tools. Specifically, there's a language that I didn't mention, which is also in this branch, called Promela, that Gerard Holzmann wrote, that lets you use CSP ideas to express and verify the correctness of algorithms. And you can translate Go algorithms into Promela if you want, and so on. And I think that's actually a useful thing to do for certain problems. The thing that is hard to understand until you've tried it is that the whole business of finding deadlocks in programs like this doesn't come up very much. At the mutex level it comes up all the time, because it's just too low level. If you program like this, I'm not saying deadlocks don't happen, of course they do, but they don't happen very much. 
And when they do, it's very clear why, because you've got this high-level state of your program expressed: this guy's trying to send here and he can't because he's there. It's just very, very clear, as opposed to being down at the mutex-and-shared-memory level, where you don't know who owns what and why. And so I'm not saying it's not a problem, but it's not harder than any other class of bug that you would have in a Go program. And until you've tried it, you don't really see it. I'm going to put words in Robert Griesemer's mouth because he's sitting right here. He'd never written concurrent programs before he worked with us on Go. He did a lot of the type system work and implementation stuff. And he'd never written concurrency code before, but he found using this thing that not only is it easy, he just finds that it's very natural. Am I being fair? Do you find it difficult? Yeah. Try it. You'll find that it's surprisingly nice to use. Yes. Do Go programs segfault? Are they safe? Because of the memory system and the type safety, it's actually a statically safe language to very high precision. There's a way you can cheat, which is called unsafe. You import a package called unsafe, but if you import unsafe, you're cheating. But there's no type punning. You can't turn an integer into a pointer. There's just none of that stuff. It has pointers, but no pointer arithmetic. And so we believe that if you don't use the unsafe package, it's easy to demonstrate safety in the program. You can get crashes if you use uninitialized variables, but safely: that's the only kind you can get, and the runtime catches them. Basically, it's impossible to write a memory-incorrect program. Yes? Is anyone using it for real work? It's only been out in public for a few months. And some people have built websites and web services entirely on top of it. The golang.org website is entirely a Go program. 
But the most interesting example is one I can't describe: we are using it internally for some production work inside Google. It's fairly high-traffic stuff, and the people who use it are pretty happy with it. I think it's ready for prime time for certain categories of problems. I think we've reached that point.

Yes? Can I give an example of why channels are more powerful than Erlang's mailboxes? No, because I don't think they are. I think they're just different. Erlang mailboxes have a certain expressive power: they do certain things well, certain other things not so well, and channels are just different. What process IDs cannot capture is this idea of a general capability to communicate. With a process ID you capture one capability, the ability to send to that process, but not the ability to receive from it. And I'm sure you can prove that they're isomorphic. But the feeling you get with a channel is that this thing I have in my hand is the ability to communicate. I can build data structures with it. I can send it to somebody. I can receive it from somebody. It's a very nice encapsulation of the idea, and Erlang encapsulates it in a completely different way. I wouldn't want to go to the mat saying one is better than the other, but they feel very different in how you use them.

Yes? Do I think Go is appropriate for language runtime implementers? Probably. If you want to let Go do the garbage collection for you, for the most part, it might actually be a nice choice. People get confused by this, because Go is a compiled language with a runtime. But it's not like a JVM. It's actually native code, but it's got a runtime library that lets it manage memory and do communication, concurrency and things like that. I know at least one person who's using it to build some VM experiments to play with, so I know it can be done. It's probably an interesting choice. It might be an excellent choice. Yes?
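To make that channel-as-capability idea concrete, here is a minimal sketch (the names `request` and `square` are my own illustration, not from the talk): a reply channel travels inside the message itself, so the capability to answer is just another value you can build into a data structure and hand to somebody.

```go
package main

import "fmt"

// request carries its own reply channel: the capability to
// answer travels with the message.
type request struct {
	x     int
	reply chan int
}

// square is a hypothetical worker: it answers each request on
// whatever reply channel that request carries.
func square(in chan request) {
	for req := range in {
		req.reply <- req.x * req.x
	}
}

func main() {
	in := make(chan request)
	go square(in)

	reply := make(chan int)
	in <- request{x: 7, reply: reply} // send the capability along
	fmt.Println(<-reply)              // 49
	close(in)
}
```

Because the reply channel is an ordinary first-class value, each caller can hand the worker a different one, which is the encapsulation being contrasted with Erlang's per-process mailbox.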
Has the state of the art advanced to the point where coroutines can be set up and torn down as efficiently as regular subroutines? I think the short answer is no, but they're much cheaper than you probably think. The things we call goroutines got a new name because they're not really the same as coroutines, and they're not the same as processes. They can be extremely lightweight; how lightweight depends on how you use them. Just creating a goroutine to do something and throw it away is basically the cost of a malloc, which is pretty cheap. As far as general performance goes, it depends very much on the problem you're solving. But no, of course goroutines have more runtime weight than a subroutine, just not a lot more, and they can actually be pretty efficient. One of the things I didn't really talk about is that Go manages stacks for you automatically, so you don't have to declare how big the stack is; it grows in segments on demand. Launching a goroutine basically requires allocating a stack block to launch it into, and everything else is a fairly small number of instructions to get one going. They're pretty lightweight.

Yes. Does Go make sure the messages are immutable or copied? It depends on the type of the channel. If the channel is a channel of immutable values, then yes, they're immutable values. If it's a channel of shared values, it's a shared value. They're like any other value in the language: if you've got the address of something, you own its address. Remember, channels are typed, so you can only send things of a certain type. If I have a channel of pointer-to-struct, when I send that pointer to somebody else to work on, I can forget about that structure. It's very different from locking to access the same thing at the same time: I literally use the channel to hand it away and forget about it. You could remember it, but that's not the way...
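A minimal sketch of that hand-it-away style, with hypothetical names (`job`, `worker`) of my own: the sender passes a pointer on a typed channel and then, by convention, stops touching the value, so no lock is ever needed.

```go
package main

import "fmt"

// job is a hypothetical unit of work.
type job struct{ data []int }

// worker takes ownership of each job pointer it receives;
// by convention the sender never touches a job after sending it.
func worker(in <-chan *job, done chan<- int) {
	for j := range in {
		sum := 0
		for _, v := range j.data {
			sum += v
		}
		done <- sum
	}
}

func main() {
	in := make(chan *job)
	done := make(chan int)
	go worker(in, done)

	j := &job{data: []int{1, 2, 3}}
	in <- j // hand the piece of paper away...
	j = nil // ...and forget about it: only the worker holds it now
	fmt.Println(<-done) // 6
}
```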
Well, then you're sharing it, and you don't want to do that; it's a bad idea. But this is a systems language, which means you should have the choice to do efficient things if you know what you're doing. You shouldn't be told you can't do that because you might get it wrong. Let me explain. The model for sharing and locking and all that stuff is like having one big piece of paper. We all gather around the paper and make notes on it, but who owns what, and when? We scribble and erase and rewrite and work on it, and so on. In this model you don't have one piece of paper, you've got lots of pieces of paper. You write a message on a piece of paper and you give him that piece and he goes away, and you don't have it anymore. He'll bring it back to you if it's relevant to you; he won't if it's not. Because the individual pieces are broken up, there's no concern or worry about who owns what at any one time; it's whoever's got hold of it at the moment. It just doesn't come up as an issue that for efficiency reasons I want to pass a pointer on the channel, because once I've passed it, I forget it, and the only person who has it is the person whose business it is to deal with it right now. If you're worried about it and you really care, don't pass a pointer; pass the value. Make it a channel of struct rather than a channel of pointer to struct. It will be a copy and there's no way you can share. But that's your decision, not the language's. The language makes it easy to do the right thing well, but it also makes it easy to make it efficient. And I think that's a really good point to end on. Thank you.
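That closing point, a channel of struct versus a channel of pointer to struct, can be sketched like this (the `point` type is my own illustration): sending the struct value copies it, so the receiver can never observe later mutations by the sender.

```go
package main

import "fmt"

type point struct{ x, y int }

func main() {
	ch := make(chan point, 1) // channel of struct: values are copied on send

	p := point{x: 1, y: 2}
	ch <- p  // a copy of p goes into the channel
	p.x = 99 // mutating the original afterwards...

	got := <-ch
	fmt.Println(got.x) // 1: ...does not affect the copy the receiver sees
}
```

With a channel of `*point` instead, the receiver would see `99`; choosing between the two is exactly the decision the language leaves to you.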