 I'm going to say something I don't often say, which is good morning. My body still actually thinks it's Thursday afternoon, and that I'm still in the UK. And none of the beer that I drank since I arrived in New York has had any effect whatsoever. But I'm going to have to put these glasses on, cos that is a very bright halogen lamp. It's making me feel a little sick. When I first put in the proposal for this presentation, I hoped to stand up here and tell you lots and lots of exciting things [a long stretch of the recording is unintelligible here]. If anybody at any point wants to ask a question, just stick your hand up and I'll do my best to notice you, because this isn't really a planned talk at all. Anyway, I'll start this little thing off. I always put a bio in; it helps people realise there's a human being behind all this. About three years ago [another stretch of the recording is unintelligible].
I like Ruby, it takes no effort to code in Ruby, at least not compared to Java for example. So I started doing all these mad little experiments, and at first it was just sort of standard stuff, just using basic stuff that Ruby provides like process forking and things like that. But as time went on I couldn't help but wonder how it was all implemented under the hood. It's a very bad habit that comes from the fact that I have a background in embedded systems and I'm actually used to writing everything that is under the hood. So I started messing around inside MRI and then I started doing these odd little experiments with using system calls. And one of the things I sort of discovered doing this is that most people don't think of Ruby as a systems programming language, but it's actually pretty good for it. You hear a lot of complaints about how the runtime is too slow. There are things that Python will do faster. But when you're actually talking to an operating system to get it to do things, the limit on how fast you go isn't normally your process, it isn't your runtime. It's actually the operating system. And a lot of people have complained about the lack of native threading that we used to have and that we now sort of have fixed. But if you're running on a multi-core box, actually I don't want to think about threading. 
Fine, if I'm running with sort of like two cores, okay, my brain can get round that. But if somebody sticks a Sun Niagara in the corner of the room and I've got 16 cores, my brain can't get round that. So I started thinking, is there a way of... I don't care how fast the implementation is. I just care that it plays nicely with these features. Does it allow me to get at the operating system but still write all of the logic in Ruby? Lo and behold, you can. How far have I got to over here? Ah, it's explaining the philosophy of Unix. Now, it's a bit hypocritical of me because actually I hate Unix. My love affair with, or rather my hate affair with Unix, actually started in 1988 when I met my first ever Unix box. And I couldn't get past the fact that everything started with these two-letter commands. And you think, well, what's the deal with that? But the thing is, if you actually chuck out what most people think of as Unix, the shell, and actually get into the internals of it, it's a very nice, light, effective operating system. Most of which revolves around the basic premise that you just shouldn't repeat yourself. Build little things, build them well, don't repeat yourself. Well, we have a similar philosophy in the Ruby world. We like to build tools that are well designed for what they do, or as well designed as they can be within the time constraints of getting them out of the door. And we like them to be as DRY as possible. There's a natural marriage. I'm not going to get into the whole testing philosophy or any of that, because that really doesn't interest me very much anymore. But there's such a shared mindset between Rob Pike and the rest of the early Unix crowd, and what we as Ruby developers actually like to do when we code, that it's just a good fit, it's a good mind fit. Sorry, this is definitely a ramble, so I'm going to wake up. Now, what Unix consists of, more than anything else, is a very small operating system kernel. 
It doesn't matter how it's implemented, because micro kernels, monolithic kernels, all of this is stuff that people in the C world care about, and will argue about ad nauseam. All that really matters to us is that it provides a very small number of facilities, file descriptors; originally the principle was that everything is a file. So if you want to share memory between two processes, effectively you end up with a file handle. It's just like sharing a file. So this allows you to write code that's actually very simple, because the vast majority of stuff you can actually do with open. You can wrap up nearly everything you are going to use on a daily basis inside an IO object. Now that's not true anymore, because we've got new features like kernel-level eventing that actually mean we need to do a bit more than just call open. We have to call their own versions of open. But this makes it a very flexible operating system to just muck about with. I should probably give a shout out to where I actually got this hangover from, which was a bar down on the Bowery, I think, called Lolita's, whose main distinguishing point seems to be that it serves pale ale, which is something you can barely get in England anymore. But if you imagine that we've got this lovely operating system that's just going to give us file descriptors, and we've got complicated tasks that we want to solve that we'd really like to get multi-core with, what's really nice about Unix is that it gives us a very simple way with these file descriptors to do it. Every process can effectively just be sitting there with its own handle. They can be talking to each other. But there are downsides to the way in which Unix actually implements all of this, unfortunately. For one thing, it's of the opinion that all processes are inherited from a single process. So when you boot up a Linux box, or a Mac, it creates this init process. 
And it's got this lovely idea, this beautiful idea, that you just copy the whole state of that process into a new process to create a new one. This means that the more complicated your processes get, when they create a new child process, the bigger that child process is as well. And 99% of the time, we actually end up chucking all of that away, because we decided to run a completely different program in the process. Now, with Ruby, I'm going to quote a number, and it's a really strange one. On an iMac G5, using Ruby 1.8.3, its memory footprint is 1.87 megabytes, just for the interpreter loaded. And I know this because I wrote a very bad script once, purely because I didn't know what I was doing at the time. And it decided to spawn 543 of them, which unfortunately made the Mac decide that it wasn't going to do anything else. I spent ages trying to look up a particular error code, which I think was something like minus 61,007 or something. And I couldn't ever find a description of it, but I think it basically was along the lines of: go away, the kernel doesn't know how to schedule this many processes. But 1.87 meg actually isn't a lot when you bear in mind that most modern implementations of Unix don't actually copy most of that. In fact, they don't copy any of it. Nearly all of the system calls for creating a new process will instead just use a copy-on-write mechanism, because you've got virtual memory pages and you don't have to copy them until they get dirty. This means that you can quite happily (incidentally, that was 1 gig of RAM that all of those processes fitted into quite nicely) schedule several hundred Ruby processes. And in each one of them, you can do whatever you feel like. I was actually doing something really useless to do with certs, because unfortunately a lot of my work at the time involved playing around with certs. 
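A minimal sketch of that fork-and-wait pattern, fanning work out to child Ruby processes (the arithmetic is just stand-in work; on a modern Unix each child shares the parent's pages copy-on-write until it writes to them):

```ruby
# Spawn a batch of child processes; each one is a full Ruby
# interpreter, but fork(2) only copies pages as they get dirty.
pids = (0...5).map do |i|
  Process.fork do
    exit(i * 2)   # each child does its own work and reports via exit status
  end
end

# Collect the children's exit statuses in order.
results = pids.map do |pid|
  _, status = Process.waitpid2(pid)
  status.exitstatus
end
# results => [0, 2, 4, 6, 8]
```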
But for quite a lot of problems, you're basically in a position where you can look at a map-reduce-style solution just by spawning off a whole pile of Ruby processes that are just bare processes and then running up whatever you want in them. I'm not sure that most people actually do do that, because my experience of deployed Ruby apps is mostly deployed Rails apps. It seems to be that you can get about 10 Rails apps to a gigabyte. There's a lot of weight in Rails, so it's not surprising, and there's an awful lot of process data that has to be copied. But to go down this process route, Ruby actually provides you with some really lovely facilities all baked straight in. There's kernel support for just spawning off processes where you don't care what they do, so if you've got a background IO job you can just chuck it out there and forget about it. It also gives you the facility to start up processes where you've got a nice pipe connecting the two of them, so that when you do care about the results you can sit there and wait on them. But where it starts to fall down a bit is where you want to actually do non-blocking IO. Now, we've got lots of non-blocking IO calls that have been introduced over the last couple of years, and they sort of alleviate some of the problem if you're interested in one file or one socket. But if you're going to write network code in Ruby, you still tend to end up basically sitting in selects where you've got to explicitly give them timeouts. The funny thing about select is that it's not actually a blocking call, technically, because under the hood Ruby doesn't actually block. It polls, and then it polls, and then it polls some more, and then it says the timeout's up and it comes back. So it's actually implemented with non-blocking IO. It's quite amusing. That's really not an efficient way of writing a server. And how many people here use Nginx for something? Right. 
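The sitting-in-selects-with-explicit-timeouts pattern just described looks roughly like this (a minimal sketch, using a socket pair as the thing being waited on):

```ruby
require 'socket'

# Wait up to 0.1s for something to become readable, then carry on
# either way: this is select with an explicit timeout.
a, b = UNIXSocket.pair
ready = IO.select([a], nil, nil, 0.1)   # times out: nothing to read yet
b.write("wake up")
ready = IO.select([a], nil, nil, 0.1)   # returns immediately: a is readable
readable = ready ? ready[0] : []
# readable == [a]
```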
If you go to the Nginx website (in fact there's a link at the end of this presentation for it), there's a link through to an article. It's basically about how to get C code to handle 10,000 connections. And it's quite fascinating, because all of the techniques that are in it, you can do in Ruby, up until the point that you start to hit kernel eventing, which you can't currently do in pure Ruby. But the main trick is to get away from this blocking element. Now, part of what I'd really like to talk about today, the trouble is, I know that it would have involved a lot of code. And normally, well, I've had bad responses in the past to the fact that many of my presentations have like 20 pages of code in them, because I tend to find that writing the code is a lot more fun than the talking bit. You have to actually go through certain processes. For one thing, you've got to actually get down at the machine. And there's only two ways to do that in Ruby as it ships. You've got the syscall interface, which is possibly the most dumb-headed, unfriendly, useless way of making a system call imaginable, because it won't actually give you back any result except whether or not it had an error code. And anybody who comes from C and is used to using syscall, where you just dump in a buffer and you get back some results, gets very frustrated very quickly. But there's something else that standard MRI ships that most people just don't seem to take advantage of, and that's Ruby DL. And this is just a wrapper for dynamic link libraries. It works on Windows. It works on Unix. In fact, I could probably have spoken about Windows instead of Unix today on that particular point. And it's great. I mean, Gregory mentioned Ruby FFI. And I really like where Ruby FFI is going, because most languages have pretty good support for FFI. And so if you want to use Fortran code from Ruby, that's the way to go, because it's just going to be clean. 
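For the record, the raw syscall interface looks like this. Note the big assumption: system call numbers are platform-specific, and 39 happens to be getpid(2) on x86-64 Linux only, so this exact line means something entirely different on any other platform:

```ruby
# Kernel#syscall takes the raw system call number and returns only an
# integer (or raises an Errno on failure): no buffers, no structures.
# 39 is getpid(2) on x86-64 Linux; look yours up before running this.
pid = syscall(39)
# pid == Process.pid on x86-64 Linux
```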
But the thing is, Ruby DL actually ships out of the box, and it's kind of ugly, but it's there. It's sort of an ugly child; well, ugly children are still quite lovely in their own strange way. And Ruby DL is actually an amazing tool, because it allows you to do the one thing that I always envy from C code when I've been doing a lot of C code: play with memory pointers. You can get memory pointers in and out of Ruby in various complicated ways, mostly by writing your own extensions in C and passing back the pointer in a string or something and then mucking about with it using Array#pack and String#unpack and all that nonsense. But Ruby DL actually just allows you to get at it. It says: here, have a pointer. Oh, by the way, I'll give you a managed pointer. I'll actually take care of freeing it up for you and be done with it. And you know what? I'll leave doing that until I do garbage collection. Which I quite like, actually. I suppose it's the one thing that I think would be nice about .NET, if I could get round everything else that's involved in learning .NET. And because the interface is common to both Windows and Unix, you can write some very cross-platform code this way, just for doing all those things that C programmers do all the time to do with memory buffers. Where you can't go this way is straight into the operating system on Windows, for the simple reason that, in my experience, it doesn't have stable system call numbers. That came as a bit of a surprise the first time I wrote some code that used them: suddenly, oh, by the way, no, on Windows 2003 this doesn't create a new process, and what did you think you were doing, actually trying to create a process from scratch, you idiot? I mean, Unix doesn't have stable system call numbers either: the system call numbers for FreeBSD and the system call numbers for Mac OS X, they're not the same, except occasionally by coincidence. And it's quite a pain in the arse, if you don't mind my using that expression. 
It's quite a pain in the arse, actually, coding a lot of stuff that uses the syscall interface on Unix, just because if you want to support more than one Unix you've got to keep big tables of all the different system call numbers that actually map across. Ruby DL just cuts straight through that. You can just load the C runtime in. Once you've got the C runtime, it's great. Every single C function in the runtime, you can call. On most Unix boxes, that means every single syscall that's wrapped by the C library. Suddenly you've got the whole operating system sat there doing what you want. All you've got to do is some very, very basic pointer math occasionally. The most obvious example of that is if you're using memory mapped files. If you map a chunk of memory in, you are going to be responsible for figuring out where you're going to put data structures in it. You are going to be responsible for munging them and unmunging them. It's just a very liberating experience to know that you can write system level code without having to go down into constant pointer math, without having to go constantly into C. You can very easily go from Ruby to memory mapped files. So why is everybody using memcached? Well, if Ruby can get at a memory mapped file, why is everybody using memcached? I don't know. To be honest, I only started looking at using memory mapped files in Ruby about three months ago, because somebody's actually released a local memcached extension that just runs memcached, or at least the memcached protocol, on a shared file. That's written in C. At the time I thought to myself, I could rewrite this in Ruby. Most of what I'm actually interested in on a daily basis is how to get Ruby to act more like EventMachine without having to include EventMachine. Because a lot of my work is all blue sky research stuff. It's not the sort of stuff people should use on production servers. 
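The load-the-C-runtime trick reads like this in a modern Ruby, where DL's descendant is called Fiddle (same idea, slightly different spelling):

```ruby
require 'fiddle'

# Open the current process (which links libc); every exported C
# function is now callable by name, no extension required.
libc = Fiddle.dlopen(nil)
getpid = Fiddle::Function.new(libc['getpid'], [], Fiddle::TYPE_INT)
pid = getpid.call
# pid == Process.pid
```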
But a lot of it is to do with getting lots and lots of different network interactivity going with minimal weight. When I started looking through the code for local memcached, I thought: this could actually all just be done from Ruby. Because the only thing that requires you to use a C extension is that if you do a syscall to do an mmap, it's going to give back a pointer, but from the core libraries there's no way you can do anything with that pointer. You can't reference into the memory space. But Ruby DL, because it actually gives you direct access to pointers and allows you to effectively use them, in a way, as an IO stream, I guess, would allow you to just map in the memory file. Suddenly the whole need to have this extension goes. It's a C extension; I think it's actually a C++ extension. I'm not quite sure why somebody would want the C++ runtime overhead on top of everything else. But instead of that, you can just go: I'm just going to memory map that portion of shared memory in. And shared memory itself, it's trivially easy on a Unix box to create shared memory. It's also trivially easy to do it badly. There was a disclaimer at the start of this. I always put a disclaimer on, because 99% of the things I actually get paid to do are things you should never do. Or that nobody knows if they should be done yet at all. But it's indicative of a difference in attitude more than anything else. There's a common attitude when people write Ruby extensions that if I want to get at this low-level functionality I've got to turn to C. The trouble is, C code in my experience is five to ten times more verbose and probably a hundred times more error-prone. I don't care how good a programmer you are, nobody gets pointer math right every time. And I started my career doing aviation systems, cockpit control systems. And there was always a pressure in that industry between the people who wanted to come in and use efficient tools like C. 
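A sketch of that memory-map-from-pure-Ruby idea, again via Fiddle, DL's modern descendant. The numeric PROT_* and MAP_* values below are the Linux ones, an assumption on my part; other Unixes use different numbers, so check sys/mman.h on your platform first:

```ruby
require 'fiddle'

# Bind mmap(2) and munmap(2) straight out of libc.
libc   = Fiddle.dlopen(nil)
mmap   = Fiddle::Function.new(libc['mmap'],
           [Fiddle::TYPE_VOIDP, Fiddle::TYPE_SIZE_T, Fiddle::TYPE_INT,
            Fiddle::TYPE_INT, Fiddle::TYPE_INT, Fiddle::TYPE_LONG],
           Fiddle::TYPE_VOIDP)
munmap = Fiddle::Function.new(libc['munmap'],
           [Fiddle::TYPE_VOIDP, Fiddle::TYPE_SIZE_T], Fiddle::TYPE_INT)

prot  = 0x1 | 0x2        # PROT_READ | PROT_WRITE
flags = 0x01 | 0x20      # MAP_SHARED | MAP_ANONYMOUS (Linux values)
addr  = mmap.call(nil, 4096, prot, flags, -1, 0)

# Raw pointer access, no C extension in sight.
ptr = Fiddle::Pointer.new(addr.to_i, 4096)
ptr[0, 5] = "hello"
msg = ptr[0, 5]
munmap.call(ptr, 4096)
# msg == "hello"
```

The same call with a real file descriptor instead of -1 maps a file in, which is the local-memcached trick the talk describes.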
And I don't mean efficient in the runtime sense here, I mean efficient in time spent developing. And the fact is that an awful lot of the kit we actually built was assembler only. It was the only way you could actually go through and validate everything. And the very first thing you ever do on any of those projects is you effectively write managed memory access, so you don't ever have to do pointer math again. But there's a common attitude that if you want to get Ruby doing anything that's outside the norm, turn to C. If you want to get it doing it fast, turn to C. I think in many ways it's part of the fact that our industry in general isn't adjusting very well to multi-core. It's quite fascinating, because I sort of dip in and out of the various things that come out of Intel on how to take best advantage of multi-core boxes and how to use threading libraries so that they take advantage of multi-core. And the main thing I think every time I read any of this stuff is: how am I not going to get this wrong? How am I going to make it so it will work on other processors, et cetera, et cetera? The great thing about an operating system is that it's somebody else's problem to fix that. There are an awful lot of anally retentive Linux hackers out there who will spend the hours necessary to make multi-core work really nicely. I don't have to do it, because by myself I'm not going to do it well. I'm going to get bored, get distracted, work on something more interesting in the project. I'm not going to be able to justify to the person paying for it why we're going to go and do this, which, to be honest, with an awful lot of low level stuff is always a problem. People just will not pay for it. They'll say: oh, well, we want it, we just won't pay for it. You've got to do that on your own time. And we're not adjusting well to multi-core, but the thing is, we don't have to adjust to multi-core if we just think in terms of process and pipeline and multiple pipelines. 
The proof that that's a better way to do stuff is that if you ever actually go over and talk to anybody who works in high-performance computing, or you go and talk to people who design graphics cards, stream processing is all they care about. They want to get these things, single streams, as efficient as possible. In fact, for the last five, ten years everybody's been obsessed with unified shaders, unified this, unified the other, because the thing is, the more that you just get a pipeline, you know that you load on the front of the pipeline and what comes out of the back of the pipeline is what it's supposed to be. And those pipelines don't very often have to communicate with each other. A lot of work has actually gone into various parts of Unix to make sure that those pipelines can communicate with each other. The most common example is FIFOs, or named pipes, which basically just allow you to pretend that there's a point in the file system, except it's not in the file system, because obviously it would be hideously inefficient if you had to go through the file system to do it. But you can basically just create something that appears to live at a path that anything can access. No one's going to tell you off if 20, 30 different processes are all accessing this one pipe. Whereas if you've got a relationship between parent and child, you'd never have more than the two processes going. It allows you to look very differently at scalability problems, for one thing, because the reason that our high performance crowd prefer to go in streams and pipelines is that you can just scale ad nauseam. You just put extra pipelines across. If you've got 20,000 copies of the same thing that need doing, but they're distinct copies, fine: 20,000 instances of a pipeline. It's the sort of generalisation that a lot of the time we don't make, because if you're working on Rails projects, which I do occasionally do, I'm not quite sure why because I really hate Rails, which I've been told off for before. 
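The named-pipe plumbing described above can be sketched like this: a pipe that lives at a path, so unrelated processes can find it by name. `File.mkfifo` needs Ruby 2.3 or later; older Rubies can shell out to mkfifo(1) instead:

```ruby
require 'tmpdir'

# Create a FIFO at a path, fork a reader, and feed it one job line.
ok = Dir.mktmpdir do |dir|
  path = File.join(dir, "jobs.fifo")
  File.mkfifo(path)                      # lives at a path, not on disk

  reader = Process.fork do
    line = File.open(path, "r") { |f| f.gets }
    exit(line == "job 1\n" ? 0 : 1)
  end

  # Opening for write blocks until a reader opens the other end.
  File.open(path, "w") { |f| f.puts "job 1" }
  _, status = Process.waitpid2(reader)
  status.success?
end
# ok == true
```

Any number of processes can open that path, which is exactly the property the parent-and-child pipe relationship lacks.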
I've said that at several Rails conferences. But I really hate it because it doesn't think the way I think. It's the principle of least surprise for David Heinemeier Hansson. It's not the principle of least surprise for me. But if you work on Rails projects, the deadline pressures are often so tight that you can only solve a little problem now. And then another little problem. The reason test driven development works so well for Rails projects is because you're biting off little bites of the cherry every single time. And the thinking in this more generalised sense of what a process is doesn't happen; there isn't the commercial time to do it, a lot of the time. But an awful lot of the scalability issues that we actually run into with large websites can equally be solved by thinking in terms of these distinct pipelines. And some of the earliest experiments I did with mucking around with using the fork system call, I was working for a financial company in London at the time. My one time working for a financial company, and it didn't go at all well. I lasted three and a half weeks, which was long enough for me to realise that, A, I really don't care about lead generation. That, B, they cared more about lead generation than they did about giving me the time to actually solve problems. And C, I couldn't justify taking that much money off them for basically sitting on my arse. But they wanted to know: well, say we use a kernel level fork to fork off several hundred Rails processes. What's going to happen? And the guy who was actually the CTO, he loves Ruby, he loves Unix. His desk only had two books on it. A copy of Programming Ruby, second edition. And a copy of... I can't remember what it's actually called, but it's a Unix kernel book that's got a lightsaber on the front. He really did believe himself to be a kernel Jedi. And he lived the active lifestyle to suit it. It was quite funny. 
But I thought, well, you'd think that actually there'd be quite a benefit to using a system level call for that sort of thing. Because once you look inside Ruby's implementation of fork, you realise that, well, it does a lot of the niceties for you that you might not always want to do. But the strange thing was, forking a thousand processes on my 300 MHz FreeBSD test box, the difference between using a vfork, which does no copying of process data at all, and actually using Ruby's fork, was like a fraction of a second. Which is kind of odd, because in the Unix world, people who work at kernel level often say process creation is very expensive. It's not, when you go and look at the Windows world. Process creation in Windows, that's expensive. And there's no real sense to that either, I find, because it literally does create a blank process for you when you create a process on Windows. But it just keeps a lot more meta-state, and it's quite weird. But if there's not really that much benefit for that sort of thing where you've actually got it boiled into Ruby anyway, then obviously you don't even have to go and look at that at a kernel level; you can just say, okay, well, I'm just going to use the boilerplate that Ruby gives me. So there's a lot of cases where I'd say, you know, the point of playing on a Unix box, 99% of the time, is actually to stick in pure Ruby, because Ruby loves Unix. It lives in Unix. It's got lots and lots of support for POSIX standards and other meaningless terms like SUSv3 and X/Open and... I was actually going to put a slide in that explained the differences between all of the different standards in Unix, and then I was looking at it and I realized I didn't know the difference between all of the different standards in Unix. 
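A rough, unscientific rerun of that fork-timing experiment, in the spirit of the one described above: time a batch of forks that exit immediately. On any modern box this comes in well under a second:

```ruby
require 'benchmark'

# Time forking and reaping a batch of do-nothing child processes.
elapsed = Benchmark.realtime do
  100.times { Process.wait(Process.fork { exit! }) }
end
# elapsed is a Float: a small fraction of a second on modern hardware
```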
They were just sort of like bold names that I've been writing boilerplate code for years to work around, this and that and the other, and I've been stealing it out of books, which is probably a good point to plug somebody else's book, because if you actually want to experiment, you can't do better than get yourself a copy of Advanced Unix Programming by Marc Rochkind, because... he takes the pain away. It's quite funny, because with quite a few of the examples that are in it, I've found my Ruby code tends to end up naturally following a similar sort of shape, for want of a better term. But the place I think where Ruby is currently let down: there is one gem, there is only one gem, that gives me absolute I-must-install fever every time I'm working on a big project, and it's EventMachine. I was dipping in and out of it for a couple of years on various projects. Last... no, it was the tail end of 2007, I was working on an unfortunately cancelled social networking site. It was nothing particularly exciting. It was just sort of like a London nightlife social networking thing. The guy who'd thought it up liked to drink beer, and the guy he worked for didn't want him to leave the company, because he'd become a coder where he'd previously been a pay-per-click analyst, at what was a pay-per-click company. But he was the only person in the company that actually fully understood how Google's pay-per-click system works. And basically the company made all of its money off of him sitting in a corner with a stack of about six monitors all up there with all of the various odd things he used to track for all of the pay-per-click edge cases. And the company made a lot of money out of it. About £10 million in a year, got on the Financial Times top 16 startups in the UK and all of this. All basically off of this guy sitting in a corner just obsessed with how the numbers worked in pay-per-click. But he really wanted this nightlife site. 
So I got drafted in to work as a tech architect on it. Which was kind of an odd job, because it involved managing people. And I don't think I'm ever going to make that mistake again. But a lot of what they wanted to do... actually, he wanted to have an awful lot of live chat going behind it. And there's quite a nice live chat system called Juggernaut which uses EventMachine. And I sort of know the guys from Juggernaut because they're based in London as well. We've spoken at a couple of European Rails conferences and talked crap, basically, in the speakers' lounge for a long period of time. Also, the guy who actually did the low level stuff, he's like 19, which meant he was 16 the first time I saw him presenting it. And I just thought to myself, I'm sure I wasn't so obnoxious when I was 16 that I could have written a better eventing mechanism than Ruby's naturally got. I hope I wasn't, anyway. But last year I got a bit of envy when I was sort of talking to them. I had half an hour before I was supposed to go on and give this lovely presentation about doing scalable servers and stuff, which probably would all fall down in the real world. And they were going to talk about various things that they were doing, and a lot of what they do is to do with push technology. I thought, I could do that. EventMachine is so simple to use. I could write a push server in half an hour. I could shove three extra slides in full of code. I know I want to do it. And my coding partner was like: you don't want to do that. I do want to do that. You don't want to do that. People aren't going to understand the crypto section I'm giving. The last thing they want is to have you stand up and talk about EventMachine. But EventMachine, once you start playing with it, it's really nice. It's like you write six lines of code and suddenly you've got this hugely scalable socket server. But I don't want to have to keep installing the C extension. So I really want to just import that straight into Ruby. 
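The shape of those experiments, sketched in pure Ruby: one IO.select loop multiplexing a listening socket and its clients, which is the heart of what EventMachine does for you. This is a toy, fixed-iteration version so that it terminates cleanly:

```ruby
require 'socket'

# A toy EventMachine-style server in pure Ruby: one select loop
# multiplexing the listener and every connected client.
def run_echo_server(server, iterations)
  clients = []
  iterations.times do
    readable, = IO.select([server] + clients)
    readable.each do |io|
      if io == server
        clients << server.accept          # new connection
      elsif (line = io.gets)
        io.write(line)                    # echo it straight back
      else
        clients.delete(io)                # EOF: drop the client
        io.close
      end
    end
  end
end

server = TCPServer.new("127.0.0.1", 0)    # port 0: pick any free port
port   = server.addr[1]
worker = Thread.new { run_echo_server(server, 2) }

sock = TCPSocket.new("127.0.0.1", port)
sock.puts "ping"
reply = sock.gets
sock.close
worker.join
server.close
# reply == "ping\n"
```

A real server would loop forever and swap select for kernel eventing (epoll or kqueue) past a few thousand connections, which is exactly the point at which the talk says pure Ruby currently runs out.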
So far, I've done quite a lot of experiments doing that. I don't really have anything that's had any real load. It's all artificial load. That's just not what the real world is like. But it's really quite nice. And it means... I mean, I've reached a certain point where my mind's just spun off into jet lag, and it's saying: you just want to keep thinking about this. You don't really want to talk to these people at all, because there's 150 of them. And it's quite an intimidating audience, because there's a lot of people here who really do know what they're on about, which I don't tend to run into in the UK, because we've got a smaller development scene and mostly it's Rails programmers who've come in from PHP. For them, by now, I could have stood up here, given ten slides full of code, bamboozled them. They'd all have gone home and said I was a genius. It would be great. Or they'd have written diatribes on their blogs about how unengaging I am. More than anything else, what I want to get across with this isn't so much a particular technical process about what you should do to get low level, because quite frankly, the slides will take care of it. And they're going to get updated, because some idiot's asked me to come and talk about this in London in two months' time. By then, I'll probably have stuck in a load of stuff about the kernel level eventing, and I'll have found something else I want to talk about, and there'll be like 50, 60 extra slides. But what I really want to get across is: just because you live in a Ruby world does not mean you're in a ghetto. We are as entitled to be system programmers as anyone else, and we've got better tools for doing it. Our language is just better than C, if you can also play with memory. And most people aren't aware that this is all there when they install Ruby. I mean, you flick through the pickaxe: Ruby DL gets two pages. You go and you hunt online for documentation about it. I don't recommend reading the documentation. 
The majority of it's in Japanese, which is unfortunately also the problem with the OpenSSL library, which is in a similar position: people occasionally have to use it, and they trust it, but they shouldn't go near it. That's not my particular beef; it's the other half of my coding team who's obsessed with that, because he spends all of his life playing with OpenSSL directly. But we have the right to be systems programmers. We have a language that makes it a lot harder to write bugs. We have a philosophy that's actually a lot closer to the Unix philosophy than most C coders' philosophy. And I think it's about time we actually started to have Ruby implementations that took that to its logical consequence. I mean, we've got Rubinius, which, if it works, will give us something that will let us all say: ha, Smalltalk people, we can outdo you at your own game. I'd love to see Ruby written purely in Ruby. It's an intellectual challenge. Everybody enjoys a self-hosting language. Except Brainfuck. I wouldn't want self-hosting Brainfuck. I'm not even sure if that's how you pronounce it out loud. But it's about time that, instead of looking at Ruby the way it's implemented now, where we have this memory bolt-on, this callback-function bolt-on to C, we actually had a Ruby implementation that was just written in it. You could implement the whole of Ruby using Ruby DL. You'd never have to have any low-level C, apart from the dynamic library linker. I'm not sure I'm the person to do it, because I've spent most of my life living in a blue-sky research lab. But I'd really like to see other people in the community start to get enthused by the low-level aspect of things. Because then we will get it. Because once people realise that you can have all that low-level goodness without any of the risks that normally go with it, the processes you use for writing websites will just as effectively write system code. I mean, I'm a bit of a heretic.
I don't write tests. I'm not test-driven. I'm not agile in the sense that it's used commercially. I'm very agile: I fork my own code all the time. It's great. But these processes have all actually been proven to work commercially. They allow us to push projects forward through terrible scope creep. They allow us to get past all those horrible teething troubles, like Twitter going, oh my God, we can't cope with the load. We can get past that in the Ruby world, because we have got mechanisms for actually doing stuff that allow us to have the processes to validate and verify what we're doing. If that same energy was applied to system-level code, it wouldn't be long before the only thing you'd find on a Unix box would be this small kernel written in C. And it might not be that long after that before somebody actually went, you know what? 99% of what's in the kernel can be written in Ruby, because most of what's actually happening in a Unix box is the kernel sitting there going, oh my God, I'm waiting on I/O. I/O is the limitation. It's the fact that nearly everything has to go to disk, or over a socket, or whatever. And I think as a community, we need to start to appreciate that we haven't just got a ghetto tool. We've got the best bloody tool there is. I know Pythonistas will disagree; I often get in arguments with people who like Python, and they don't always go well. But I think that's what I want to get across. I want us as a community to start thinking of ourselves not just as people who write web apps. I want us to think of ourselves as people who can write any app. You know, I started in embedded systems. There's a few people on ruby-talk who've got a similar background. I'd really like to write my embedded systems in Ruby. If I have to make a few compromises, all right. I mean, Charles Oliver Nutter's got some good stuff going on, at least in principle, with Duby, with ways of doing annotations, of showing types and stuff like that.
But even on low-level systems where you're putting something in a cockpit, 99% of the time you don't care about that. You don't need a statically typed language. And you definitely don't need a language that forces you to think in terms of physically iterating through code. So I'm going to wrap up, because I've run out of ramble. But if there's a single point to this, and to my flying 3,000 miles, getting very drunk, meeting some very nice people and hopefully sleeping at some point, it's that we have the opportunity to do a lot more than we do. And the only thing we can break in playing with it, in figuring out if we can do it, is our own boxes. Apart from the famous VIC-20 bug that used to make VIC-20s go bang, it's actually almost impossible to literally destroy your machine by poking the wrong memory location. All you're going to do is get a kernel crash, get a reboot. We should apply the same dynamic, experimental attitude to the other areas of commercial computing that we're applying to web applications. And that's my point. And if a single person here sort of decides to agree with me, then that more than justifies the jet lag; I think the hangover justifies itself. So if anybody's got any questions, I'd be quite happy to answer them. I'm really bad at answering questions, but I quite like it. Well, that's either stunned silence because you're all thinking, God, get him out of the country, or it's stunned silence because, quite frankly... I mean... what? Yeah. I'll have this up on SlideShare later, as soon as I figure out how to actually get onto the network, which will probably require the help of somebody who can see. And it's going to get updated later this summer. It will have new stuff in it that will be more expansive. There are areas I've not covered in it. I've not properly covered signal handling. I've not really covered kernel-level events.
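Signal handling, one of those areas the slides skip, is in fact a one-liner in stock Ruby. A minimal sketch, assuming a Unix platform with `SIGUSR1`, with the process poking itself:

```ruby
# Signal.trap installs a handler for a named signal. The block is an
# ordinary closure, so it can flip a local the rest of the script watches.
received = false
Signal.trap('USR1') { received = true }

Process.kill('USR1', Process.pid)  # send ourselves SIGUSR1
sleep 0.1 until received           # handler runs between VM checkpoints
```

The same `Signal.trap` call is how a long-running Ruby server picks up `TERM` or `INT` to shut down cleanly — no C, no extension.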
I've only sort of glossed over the concept of shared memory. All of that's going to get fleshed out, and if I find the time I might even try and write this up as a proper sort of how-to guide. But in the meantime, there's some resources on the slide before this one. I recommend going and reading Beej's Guide to Unix IPC and Beej's Guide to Network Programming. They're the two best things I've ever found online for getting you past the early ramp-up... that's not the expression I'm looking for... learning curve. And apart from that, have fun.
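For a flavour of what Beej's IPC guide covers, translated into Ruby: the oldest Unix IPC mechanism there is, a pipe across a fork. A sketch, Unix-only since `fork` isn't available everywhere; the message text is just illustrative.

```ruby
# Parent and child each close the pipe end they don't use; the child
# writes, the parent reads until the child's close delivers EOF.
reader, writer = IO.pipe

pid = fork do
  reader.close
  writer.write("hello from child #{Process.pid}")
  writer.close
end

writer.close
message = reader.read   # blocks until EOF from the child's end
Process.wait(pid)       # reap the child so it doesn't linger as a zombie
puts message
```

Everything here is stdlib: `IO.pipe`, `fork`, and `Process.wait` map one-to-one onto the `pipe(2)`, `fork(2)`, and `waitpid(2)` calls the guide describes in C.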