Before we start, can I make sure everyone has their phones off? Our next speaker probably doesn't really need much introduction, because a lot of people will know him from his work with Ubuntu and GNOME and all that stuff, but I'll try it anyway. Today he's talking about something a little bit different, Node.js, so I'd like to introduce Jeff Waugh. This is worse than last night, except without drums. How's the volume at the back? Is the microphone all okay? Okay, good. I'm not going to live-tweet my own talk on my Twitter client. Well, I'm not going to bother with that because... So if you'd like to have a back channel that doesn't involve publishing all over Twitter (wait, no, damn it, I shouldn't have said that), there's chat.nojs.org. You can use a Node service while you hear about Node, while I present using a Node-based presentation tool, after I've tweeted about my talk on my Node-based Twitter client. Yo dawg. This is my IP address, which is probably going to encourage all of the wrong kinds of behaviour, but we're going to need it later anyway. I don't know, I think I assigned my IPv6 address to my flying car. So at one point, this was going to be "HTML5 Web Apps with Node.js". Then I got some feedback from the papers committee that perhaps I should speak more about Node.js, because people didn't know what it was. They thought it was an actual JavaScript file, node.js. But no. So I was going to spend a very, very brief amount of time talking about Node.js and then get stuck into all of the stuff that I've been building with it, but based on some feedback from folks at the conference here, people want to know about Node.js itself rather than just my stuff. So: all about Node.js. Who's been reading the news about Stuxnet? Yes, yes, yes. The New York Times described it as "the most sophisticated cyberweapon ever deployed". So that's how I describe everything now. It's bedtime: let me bring out the most sophisticated cyberweapon ever deployed.
So, the first thing is that Node is a JavaScript environment. If you've used Python, Ruby and friends, you will not be unfamiliar with the REPL, the read-eval-print loop. When you run Node on the command line, you get one of those, and you run JavaScript. So, nothing at all unexpected. Operates to spec, does all the right things that you would imagine it does. So that's rather unsurprising: it runs JavaScript. Of course, you can do a hashbang script if you want, JavaScript on the command line. So, it's not in a browser. But the mission for Node.js is actually really quite interesting, and it almost comes before JavaScript. The JavaScript is a fancy part of the mission, but the idea behind Node.js almost comes before JavaScript itself. So, this is, as it was described by Ryan Dahl, the creator and maintainer: to provide a purely-evented, non-blocking infrastructure to script highly-concurrent programs. The operative words are purely-evented, non-blocking and highly-concurrent. So, who's used tools that end up throwing in these buzzwords, and is familiar with these concepts already? Okay, so that's maybe a third of the audience. So, that's good. That's a good start. I'm going to go into some background into why they matter and describe why Node is so asynchronous. To do so, I'm going to use a story about bunnies, hamsters and a hyperactive squid, which was originally provided by Simon Willison. And I think it's an excellent, excellent analogy for why you would want to do evented I/O, asynchronous I/O. So, your web server is a bunny. As I'm sure you're well aware. And you have a happy hamster come to your web server to request a resource. And this is an entirely comfortable relationship. One bunny, one hamster, one resource. They have the little transaction and they wander off.
But when you have other hamsters who have perhaps found your blog on, say, Hacker News or Reddit or something like that, they come to your bunny and they request your blog, like Sylvia's blog about her awesome video stuff with HTML5. And they all start queuing up and lining up for the resource. But the bunny can only handle one resource at a time. It can only deal with one hamster at a time. Like a cranky barista. So, what we do with our web servers traditionally is we have many bunnies, a family of bunnies even, to service the happy hamsters. And as long as you have as many bunnies as you have hamsters, everything's fine. Then this happens. You're fetching a resource and perhaps it takes two seconds. So, we have hamsters lining up. It doesn't happen very quickly. So, we start having a queue. Then we have impatient hamsters and unhappy, busy bunnies. And they start behaving like McDonald's employees. Not McDonald's burgers, employees. So, if you swapped your web server bunny for a web server hyperactive squid, with many tentacles, who can madly service the hamsters as quickly as they possibly can, you could solve this problem. I don't know if you can read the slide; it explains more of the detail down the bottom. But the squid can leap around doing as much as it possibly can: take the change here, provide the resource here, take the burger here, take a complaint here, leap backwards and forwards. So, this is what Node aims for. Node is designed to be the hyperactive squid and to avoid all of these terrible I/O problems and queuing problems that we have with stuff like Apache. So, this is a comparison between Nginx and Apache. Along the side there you've got memory in megabytes, and down the bottom you've got the number of concurrent connections: the number of clients trying to access a resource on your server all at the same time. And you can see Nginx down the bottom uses a pretty consistent amount of memory.
I mean, it's hard to tell it's going up just a little bit, because the Apache chart is so high, but it is actually going up a little bit. But it's handling the concurrent connections without using an enormous amount of RAM to do so. Whereas because Apache has to provide a bunny for every hamster, and bunnies have great memories, as I'm sure you know, they use a lot of memory, the memory just keeps going up and up and up and up and up. So, when you try to access a very popular post on someone's blog that they're hosting on a crappy... oh, I shouldn't mention vendors' names, should I? Maybe they have a badly configured virtual host or something like that; they don't really know how to make Apache work very well to handle lots of load. It just cuts out. It can't handle any more. Piss off. Don't read my blog. Don't buy my product. All that sort of stuff. Because either you're using too much RAM or you're using too many connections. This is the requests-per-second comparison between Nginx and Apache. Down the bottom, concurrent connections, it's the same, and then you've got requests per second. Sorry, this is sort of an inverted graph, but it's meant to compare to the previous one. When you don't have too many concurrent connections, you can handle lots and lots and lots of requests per second. But as you have more concurrent connections, the server's doing much more, there are more processes running, and it can't quite do as much. So Apache doesn't compare particularly well to Nginx there either. But you might say you can always configure Apache to have more processes running. You can increase your MaxClients so it can serve more happy hamsters. Or you can say it has MPMs, it has different implementations. So instead of having a whole Apache process per happy hamster, you can use a thread or a pool of threads or a pool of processes and threads.
That's fine, but in Linux, threads and processes are much of a muchness. There's a bit more efficiency in being able to use threads, but they're pretty much the same in terms of resource usage. And of course, PHP says no. If you would like to use an alternative MPM for Apache, you're screwed, because thread safety is hard. Now, it may sound like I'm teasing PHP people here because thread safety is difficult, but the great thing about PHP is basically they can depend on whatever the hell they want, any library that they want, and expose that to a PHP programmer and make it really easy. But for all of those libraries underneath PHP, there's no guarantee that they're going to be thread-safe; they're just really, really easy to use. PHP turns up in an Apache process, runs your PHP stuff, uses the libraries underneath, serves up your blog, all that sort of stuff, and everything's fine. It would take an enormous amount of work to change PHP to be thread-safe, and it would have taken an enormous amount of education of Rasmus, before he started working on PHP, in order for PHP to be thread-safe from the start. Now, this sounds slightly rude, but part of the goal of having an evented, asynchronous, single-threaded system is to not have to care about thread safety at all. So Node is not thread-safe, but it's designed to be used in a way that thread safety is never going to be a problem anyway. So we just avoid that whole problem. Computer science for the sake of avoiding computer science. So, this is where I had my first experience with Nginx. It was great. This is the very early days of the new Crikey website from early last year. Prior to the third of the month, at the end there, you can see all of the open Apache processes. 500 was the max, or it's about 450 or 470, something like that, max processes. So there's always some open. All those dips are lunchtime, when lots of people go to read Crikey, because they send out their email at lunchtime.
And you can see that after the third, there's not much going on in Apache land. That's because Nginx was sitting in front of it. So there were only a few requests going to Apache, because every single static file, so an image, JavaScript, CSS, and you have ten of these per PHP-generated page, all of those were being served by Nginx, and Apache was only serving the dynamic stuff. And it had something of an impact on the memory use. And if there's anything that is a very pleasurable thing for a sysadmin, it's consistency. The inconsistency of the way that Apache would deal with things at lunchtime, the huge spikes, could cause problems depending on their slightly different behaviour, those sorts of things. Putting Nginx in front of Apache made things very, very consistent. Vastly less memory use means you can have heaps more in the file cache, and you can spend all of your CPU-chugging, RAM-chugging, dynamic PHP calls on stuff that you care about. So Nginx is built to be an evented web server. It does run a number of processes underneath it, but each one of those is evented and asynchronous. What about this JavaScript crap, though? Why would we bother with Node.js if we've already got Nginx, which is all very nice and lovely? JavaScript was actually created precisely for this use case. Oddly, not running on a server, of course, slightly different. It was created in a few days by Brendan Eich for one of the very, very early Netscape browsers. It ended up being used by Netscape for server-side JavaScript way, way, way back when, but used inappropriately and stupidly, so it never took off. But if you want to provide something like Nginx, a heavily evented networking server, but you want it scriptable, JavaScript ends up being incredibly cool for it. It was designed from the very beginning to be a hosted scripting language or an embedded scripting language. So JavaScript always lived in the browser. That was the context in which it ran.
You could provide other APIs to JavaScript from within the browser. It's always single-threaded. There's a single thread of JavaScript running these days, per page, perhaps. But there was always a single thread hanging around. You didn't have multi-threaded stuff, anything like that. Nothing complicated for web developers. And it was based on an event loop. But instead of that being based on special library calls or framework calls, it was just part of the language. You'll know that there's no such thing as a useful sleep function in JavaScript. So anyone who uses PHP and then has to deal with JavaScript will think, oh, I can throw in a sleep call here and the browser won't do anything for a while, and then I can continue on later. You can't. You have to run a thing called setTimeout and then have a function that gets called back. What's all this callback business? It's all very confusing for a PHP programmer. But that's because it's event-based, much more like if you were writing a desktop application in something like Qt or GTK or one of those other GUI platforms. So JavaScript makes event programming natural, because it's part of the context of using JavaScript. So let's have a look at an example of using Node.js. This is the Node.js equivalent of Hello World: start up an HTTP server, and when you get a request, you spit out Hello World. So the first thing indicates that we actually have modules and libraries in JavaScript now. One of the things that happened in the last little while was the CommonJS standard for defining modules. So instead of JavaScript just being stuck on its own and having to load 50 million files before you run your own, we have libraries and modules. The first call there, http.createServer, is a bit of a shortcut which I'll come back to in a minute. But basically what you're giving it is a function that will run every single time your web server receives a connection.
It receives a request, you manipulate the response, and then you finish off your response processing. So the first thing is writeHead 200, successful, give it a content type of text/plain. And then response.end with a body is a shortcut for doing response.write and then response.end, so you can just dump your whole thing out in response.end. And then with the server object that we're chaining down to with this listen, we're telling the server to listen on port 8000. So I will flick over to my terminal and run Hello Web. And do you remember my IP address? Why's it quiet? No, I didn't tell it to. Yeah, I mean, that script is like, you know, print hello world, and practically nothing else. But yeah, you can just... So Node is an extraordinarily simple... I mean, the idea is that it is something like Nginx except, you know, without all of the extra web server stuff. So it doesn't do logging by default. It doesn't know how to do logging by default. It only has very, very simple primitives. But I'll come back to that in a bit. So, 169.222.9.158, port 8000. Oh, good, thank you. chat.nojs.org. So it should come back like this if you're using curl. You get the headers and it pops back like that. So nothing really surprising. You'll notice that there are two headers there that I didn't provide myself: Connection and Transfer-Encoding. Node automatically knows how to do chunked transfer encoding. Its HTTP server will happily do that for you. You don't even have to worry about setting it up. And by default, it's HTTP 1.1. It'll happily do keep-alive. You can configure it to close the connection if you want, but you don't have to. Here's the second example, which is designed to show what happens when you pause during the server process. So it's basically the same thing, we send out hello world, except we send it out in pieces. We have a timeout after sending the hello.
And after 2,000 milliseconds, it sends the response.end and the number of connections that it's seen. Now, can anyone spot a bug in this code already? I might come back to you in a moment. There's no loop? Well, we have an event loop because we're running in JavaScript. So as soon as you hit listen(8000), you're getting a callback every time there's a connection. You would have to explicitly exit the loop to get out of it. So we don't need a main loop, because JavaScript has an event loop already. Ending the connection? Slightly confusing, although it was in the previous slide: response.end is a shortcut for response.write then response.end, but it's ending the response, not the connection. Bingo. So it's partly a JavaScript thing, partly showing off what happens when you have a single-threaded, event-based system. We've got conns defined outside our function and outside the web server itself. So we want to have the number of connections counted independently of how many connections are coming in. So we define that outside. On the inside, we do conns++, so we increment conns. And then we print it after this setTimeout. The only problem is that if someone else requests this at the same time and increments the connections, we're using the same conns that's in the outer scope, and so two people will probably get something like connection number four. So the fixed-up version sets a variable within our function, within the closure, to the pre-incremented conns. So we get the number of the connection internally to our function, and then when we print the connections, we print myConnection, and it's all happy. It's actually the number of the connection that you have, not the number of connections of however many other people have connected on the outside. So if you had 100 people connect between incrementing conns and your two-second timeout, it'd be fine. You'd have the same number. So I'll just run that.
If you do the same request, you'll get something like this, and then two seconds later, you'll get that. But no one will get the same number. So if you're doing it, call out your number. That was quick. while true; do curl... I welcome you to do your while-true-do-curl loops. Sure, ApacheBench. 200? I was going to say a thousand. What? You just did? So yours wasn't particularly interesting, but I can see on my CPU graph in GNOME that it's looking slightly more interesting, so whoever did that... So my laptop is fully interactive and idly using maybe 20% of the CPU, and this is a hunk-of-shit Atom. So, oh, unless, unless... that's how awesome that was. So I'll see if you guys can abuse that any further. Is that being mean to the sysadmin? No, I would not do that on a production server. So it's just died again, has it? Nice. Yeah, no, my CPU is telling me that something is going on. So I think what's happening is that someone's trying to do a number of concurrent connections, and I don't have enough file descriptors to handle that many at the same time. But if you do like, you know, 5,000 of your own... I think 5,000. What's the default ulimit for file descriptors? 1024? Really? Why is my laptop configured like that? Come on. It is rarely asked to do that kind of work. Anyway. So yes, I'm getting this interesting graph of my CPU going up and down like this. Was that on the AARNet sheet with all those hints and tips? No. Yeah, I don't tend to run into this on my netbook. People do go bananas for Node.js. It's a little bit like Ruby was about, what, four years ago? Was it four, five? I know it was big in Japan, and then... No, Ruby was big in Japan, and then Rails went crazy. So it is a little bit like that early Rails thing where people have a religious experience and then go completely nuts, in an angry way or a nice way. Not really. I'll get to some of the things that people have built on top of it. So this is the stack diagram.
Part of the reason why people are very keen on Node.js is that it's built with really, really good stuff. And I find this one of the most inspiring things about great open-source projects: they take existing, really, really, really shit-hot projects, put them together into something really quite new, or a new take on an old theme, and put it together in such a way that you've got an incredible project. So, down the bottom left-hand corner, you've got libev, which is a much nicer, much more up-to-date version of libevent. It's essentially just an event loop library, and it will happily do select, epoll, kqueue, et cetera, et cetera. Quantum event loops, I'm sure, at some point. libeio, which is kind of a cousin of libev, does asynchronous I/O, and so it does thread-pool stuff so that you can do asynchronous I/O. It doesn't actually support asynchronous I/O on Linux using the AIO APIs, but maybe at some point. c-ares is an asynchronous DNS library; you're very likely to be doing DNS things in your server, and so you've got an asynchronous client. The HTTP parser was written by Ryan, who wrote Node.js. Basically, he wanted a really, really, really fast HTTP parser. So he wrote one in C, and the goal is correctness and speed, and it's great! And there's always a black sheep in the family, so there's OpenSSL. Originally he was using GnuTLS. But I don't even think it's a sheep. And then on top of that, you have a very thin layer of C and C++ to integrate these bits together, as JavaScript bound into V8, which is the JavaScript VM that the Google dudes built for Chrome and Chromium. So it's incredibly fast and getting faster all the time. And then Node.js is some nice JavaScript stuff on top of the very, very slim C/C++ bindings to V8. So there's actually not that much. Originally it was this great big blob of C and C++, and basically all of the APIs that you used internally were C and C++, but now it's a tiny, tiny, tiny wrapper, and the rest is JavaScript on top.
So it's actually really easy to hack. Every now and then, you do have to dive down into C++, which is sad, but, you know, we suffer that. So: beautiful pieces, well put together. One of the best things for a new project is to have a kick-ass maintainer. Ryan is the quiet, retiring, very humble style of maintainer. He's more a Tridge than a Linus. And he's actually so quiet that when he pops up on the mailing list, he will say something and it will have the weight of a BDFL statement, simply because he doesn't say stuff very often. When he does, that's it. Rock and roll. So it's pretty good. The only problem with that is that the Node community, because it has exploded so quickly, is a little bit unruly, and it's gone slightly crazy. So there are some lieutenants popping up that are doing the bondage and discipline around Ryan's very humble and caring approach. So he's really, really cool. It was funny the other day: Havoc Pennington, one of the very early GNOME contributors, popped up on the mailing list, because he's doing some cool stuff with JavaScript at the moment, and he had various comments about Node. And Ryan responded and said, oh, you're one of the people who got me into open source, I had your book in high school. And so there was a little bit of a Havoc and Ryan love-in. That was nice. The entire stack is evented. There have been attempts at doing this sort of stuff before, but with all sorts of exceptions and tricks and hidden rules and gotchas and stuff like that. With Node.js, you're given a JavaScript environment where you know that everything is evented. There are a couple of blocking calls if you want to use them, but you don't have to use them at all. The only reason people tend to use those is just to make things like shell scripts easier, JavaScript-based shell scripts, stuff like that. So there are no gotchas, which is an incredible achievement.
In addition to just the basic JavaScript, you've got an HTTP server, HTTP client, DNS, TCP, Unix sockets: all really, really very simple, straightforward network programming primitives, and they don't do too much magic behind the scenes. The HTTP server is a little bit of a standout in that it does tricky things like handling HTTP 1.1 stuff for you, but that's good, because all that sort of stuff is hard and you don't want people putting shitty web servers out there just because they're mucking around. So that's great. You actually feel like you're very, very close to the metal, despite being in a JavaScript VM. It's also really, really well timed. People have tried doing this before. I heard a very interesting story from Andrew Bartlett yesterday: Tridge, four years ago, got a bee in his bonnet about embedding JavaScript in Samba as a scripting environment within Samba. They used a thing called EJS, embedded JavaScript, and Tridge was very, very set on making this work, and it seemed like all of the other developers of Samba were saying, Python, it'll be great. So it took them a while to convince him, but they ended up going into Python land. But they were way ahead of the curve; four years ago, embedded JavaScript as part of a networking service was just madness. Since then, a whole bunch of things have happened that have made it so that you can actually do this stuff easily. One of the most important is CommonJS modules, so essentially libraries and namespacing for JavaScript, for random libraries. All of the Node libraries and modules that you can install use CommonJS, so it's really easy to use. Back then they didn't have that. There was no such thing as, you know, load me a library. How are they going to do serious scripting without that? And then, since then, of course, the browser performance war: instead of having to use something like Rhino, implemented in Java, there's SpiderMonkey, TraceMonkey and the whole family from Mozilla.
We now have V8, which is incredibly fast. There's also a project called Narwhal which uses TraceMonkey. But it seems that Node has got the buzz and the interest and all of the right kinds of bits and pieces in the right place. And it's nice having all of these web browser projects fighting each other tooth and nail to get the fastest JavaScript implementation. And it ends up helping our little server project. It's cool. There are some limitations. It really only runs on x86, x64 and ARM, which, if you look at those platforms, is pretty much what Google deploys to: Android, x86 and servers. And there have been people attempting to port V8 to other things, but they haven't really been particularly successful. The ARM one is really interesting, because it means you can run Node.js on mobile devices. The 0.2.x branch has really, really shitty SSL and TLS support. It's hard to do asynchronous TLS. The Samba people have gone through all of this trouble as well, so I'm actually going to point Ryan in the direction of Tridge and see what happens when you put those two in a room, other than an explosion, to see if they can help. But the 0.3.x branch has new client and server implementations. It's apparently pretty good. It seems to work from basic testing, but I'm not actually running anything production on 0.3 yet. It was pretty fine for a while, but then, you know, new versions of V8 started dropping in, and totally new APIs, and then you stand back and let the crazy happen. There is a best-effort Windows port, but no one really gives a fuck. But as Node becomes more interesting, people who care about other platforms get more interested, and unless you give them a very strong whiff of you're-not-a-real-human-being-go-away, they'll probably try and port your software to their platform. And the interesting thing is that Windows actually has really, really, really kick-ass asynchronous networking APIs. So Node on Windows could be a match made in heaven. For some people.
There's also this one, which is actually really annoying: the V8 heap limit. In recent V8 versions, this has been cranked up from 1 gigabyte to 1.9 gigabytes. This is a real problem if you're doing anything significant with a Node.js server. The good thing is that hopefully you're mostly using Node as glue for other services; you might be using Couch or Redis or whatever. It's a magic number that is apparently insidiously plastered throughout the V8 code base. I don't think it's so much the number itself; there are huge dependencies on how all of that stuff works in V8. So they've pushed it from 1 gig to 1.9, but I think much more work is going to have to happen to go beyond that. I think there's some interesting work starting to happen on that with the 3.x versions of V8, but that hasn't filtered back down into anything happening in Node. So that's kind of a bummer, but the good thing is that there's so much interest in the terribly misnamed world of NoSQL services, which will happily store things in memory for you as a network service. So you don't actually have to use your JavaScript memory to store it. The other thing, too, is that Node has Buffer objects, so that you can store binary data, and you can essentially do memcpys from JavaScript, and Buffer objects don't take up any of that heap limit. So hopefully you can shove stuff into Buffers, or shove it into CouchDB, Redis, or, if you have to, Mongo or something like that. There are alternatives, so if you're more familiar with other platforms than you are with JavaScript, or you have that JavaScript-brings-vomit-to-the-back-of-my-throat feeling, which I can understand, but one day you'll get over it: Twisted for Python. Basically, when I explain Node.js to people in a very short period of time and they're Python people, I say it's like Twisted. It's basically Twisted and JavaScript all mashed together, except without the really idiotic function names and object names and all that sort of stuff. So that's a relief.
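A quick sketch of those Buffer objects; note that Buffer.alloc is the modern spelling, while the 0.x era of this talk used new Buffer(), so treat the exact API as an assumption here:

```javascript
// Buffers hold raw binary data outside the V8 heap, so they don't
// count against the heap limit discussed above.
var buf = Buffer.alloc(4);        // four zeroed bytes
buf.writeUInt32BE(0xdeadbeef, 0); // write a 32-bit value, big-endian
console.log(buf.toString('hex')); // deadbeef

// memcpy-style copy between buffers
var dst = Buffer.alloc(4);
buf.copy(dst);
```

Network reads and writes in Node hand you these same Buffer objects, which is why they feel so close to the metal.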
The comment was that Twisted's callback mechanisms are richer than JavaScript's. They very much are. And the way that you interact with Twisted is essentially like a framework, not so much like a library; it kind of takes over your world. But yes, it is a lot richer. JavaScript has its callbacks and that's it. You can do all sorts of fancy things with closures and functions and stuff like that in JavaScript, but that's actually the inspiration for quite a lot of arguments on the Node mailing list at the moment. There's a somewhat famous thread now on the Node mailing list: some guy says, I love async, but I can't code like this. And it's basically, please change JavaScript so that I don't have to use callbacks. You might be looking for another project and another mailing list. EventMachine for Ruby is very much like Twisted, but for Ruby: it provides a framework for doing async stuff on top of Ruby. AnyEvent for Perl, for those of you using it this millennium. And for Java folk, NIO2 has a bunch of async stuff, and that is ridiculously fast and really, really cool, although you do have to write very verbose code. And then there's Erlang, which is all very exciting. But if you don't feel like learning Erlang and using something that's robust and has been pushing your mobile phone calls around for years and years and years and is basically bulletproof, you might like JavaScript. I actually have one minute to go. Is that including questions? Five minutes of questions, okay. Now we find out who's more important, me or you. I'll just briefly go through this. It's great for simple scripts. A really good example is things that you have on the web that you want to run on the command line. There's a great tool called UglifyJS which compresses and munges your JavaScript down.
It's much faster than Google's Java-based Closure Compiler, and it manages to do all sorts of extra fancy things, written in JavaScript. And so you can run that on the command line, in a Makefile, when you're building your web project. Web services: it's fantastic for that. You throw hundreds of connections a second at a Node and it doesn't have a problem, so you can do heaps of really, really cool REST stuff. Sorry. So, I thought I already made comments about Java. Really good for high-performance glue in a network environment. Real-time web, so WebSockets is a huge thing for Node, and highly concurrent non-HTTP networking. You have a Unix socket and you want to do crazy shit with it, or you have UDP, TCP, whatever. You don't have to be doing HTTP for Node. I might just go straight to questions, but I'll go through these slides as I'm doing questions. I will repeat questions as well, so first we'll do the mic. Right. Hi. So, in one of your examples, you had the response object and you're writing to it. Presumably it buffers that text which you have told it to write, and writes it when the socket can accept it. Did everyone hear that? If you're going to be writing a lot of text, you're going to want to wait until that first block of data has been sent. You didn't actually have anything in those things for waiting for completion of I/O. So, my first answer is: Christ, no. My second answer is: there is no buffering in your server responses. Buffering is something that you should do if you want to do it. It goes straight away. That's why I ran curl -Nsi: because when you do writeHead, that's your headers, and the very first time you write to the response, it sends the headers and the first chunk of whatever you wrote, and then every write after that, it sends straight away. Are you saying that the writes actually block? No, Lord, no. Nothing blocks.
Well, underneath there is, but the effect of what Node tries to do is to push it out immediately, because it's designed to do long polling, Comet, all that kind of stuff, and if you want to do buffering, that's your problem on top of it, because it's a very simple HTTP primitive. First off, as a historical note, there was server-side JavaScript on top of Rhino from around 2000 running big production sites, but no one ever knows about it. Yeah, Netscape did it themselves, but it was shit. You mentioned Havoc's comments, and I guess you've read his proposal on an actor-based sequential way of doing the same kind of async work. I was wondering, do you think that would be useful, having, I guess, used Node a bit? Do you find that the whole anonymous-function spaghetti code is a bit of a problem in terms of debugging? I don't. I think the key is, once you get the Zen of JavaScript, because it takes a bit of Zen to get, you find alternative ways of going about things. Some people write their own step functions and all that sort of stuff to avoid nesting asynchronous stuff, but if you don't keep nesting callbacks, callbacks, callbacks, anonymous functions, turtles all the way down, you don't have that problem. But I think people who start off and are not quite sure how they should do it see examples written inline, and they just do that. Yeah. So this is something I was going to refer to a little bit later in the talk. Node being single-threaded, it doesn't help you with scaling up to multiple cores. If you want to do that and you don't like JavaScript, go for Erlang. But if you do like JavaScript and you want to play with Node and you want to scale up to multiple cores, one thing is you can just shell out other Nodes and give them a file descriptor and do message passing yourself. Or you can use the HTML5 Web Workers API to do that for you, and you can do message passing of JSON and whatever between Node processes.
So there's a lot of really, really cool stuff coming down from HTML5 that ends up being repurposed for Node. Another really good example is the node-canvas library, which is actually a binding for Cairo in Node, but it's exposed as the HTML5 Canvas 2D API. It's very sexy stuff. But yeah, basically, you start a bunch of processes and at the moment you have to deal with it yourself, and Ryan wants to make it so that you don't have to, eventually. Sorry. You just asked the questions I was going to ask. I just wanted to point out that in your list of alternatives there was one glaring omission, and that is Go, because Go actually allows you to write in an asynchronous style, or use concurrency primitives to write using coroutines, so you avoid all that callback hell, and it takes advantage of multiple cores and it has a lightweight syntax that's smaller and simpler than JavaScript. So if you want to use something terribly new, terribly unproven, that isn't JavaScript and is an entirely new language that you have to become familiar with, you can use Go. I'm kidding. It's worth experimenting with. I have a Go tutorial tomorrow. Come along. It is actually really cool to see all of this stuff filtering down into real systems languages like Go, rather than staying up in JavaScript land, although I think JavaScript will do well with it. Just a couple more seconds. I can just repeat the question if that's faster. Is V8 and the like now an actual library, or is it still just copy it into your source tree and hope that nothing breaks when you upgrade it? V8 has been a library for ages. The Chrome team don't know what libraries are, so they copy everything into their source tree. So for instance, if you install Node on Debian, it depends on the V8 library. Yeah, we've had a thought about putting JavaScript in as a server-side scripting language in the database, basically to reduce the time you hold locks.
The other thing I'd just comment on: single-threaded versus multi-threaded. If you design it so it's only going to work multi-threaded, that still means single machines; it's still not going to scale. What you really do is run multiple Node.js processes, one for each CPU core, and load-balance between them, and then you add a zSeries to the problem, and then... But many machines. Far away. I can just repeat the question. Where does HTML5 come in? That was the stuff that I didn't focus on, because I figured, based on all of the questions that I was receiving, people were much more interested in Node itself. But there's some cool HTML5 stuff that's worth looking at. Socket.io implements WebSockets, but it also implements every possible way you could get something approaching a bi-directional polling connection like a WebSocket, and it will do all sorts of awkward, crazy shit to help you do so. But hopefully you're using a WebSocket. And so it's WebSockets, HTML5, cool stuff. These are games. Canvas 2D: you get the Canvas 2D API in Node. WebGL bindings for OpenGL, so you can write an OpenGL program directly in it. webOS 2.0, which is an HTML5-based mobile platform from the ground up; it's not Java shit. It uses Node as its services layer in webOS 2, so you can even provide a JavaScript service for your application that will just run in the central Node.js server on webOS 2. So that's very cool. That's the Palm thing, yeah. Shitty hardware, awesome software. One more question? I made that WebGL binding. What do you think of it? You're the maintainer of node-webgl? Yeah, that's me. What do you think of it? And what do you think of languages like CoffeeScript going on top of Node.js? Right. Well, I'll talk to you about your project later. It's pretty cool, although the OpenGL stuff is a little bit unstable, so that has some impact on whether your WebGL API is good. CoffeeScript is a really cute thing. It's basically an unholy mix between Python and JavaScript.
And it compiles down to JavaScript. And some people find it an easier way to grasp crazy JavaScript closure stuff. The cute thing in Node is that you can actually run CoffeeScript almost natively: you can load a CoffeeScript file and Node will happily compile it in the background for you and run it. It's cool. I'm not all that keen on it myself; I don't use it. But there are heaps of cool Node libraries where you don't even have to care about it, and it turns out they're implemented in CoffeeScript. So they're very cool. Thank you. Just got to present you with a gift from Linux.com. Thank you.