My name is Tony Arcieri, and I'm giving a talk on three of my projects today: Rev, Revactor, and Reia. But I kind of decided the naming scheme is a bit redundant, and Rev is probably one of the least Googleable names ever, as I'm sure you can guess. So I'm now renaming Rev to Cool.io. So anyway, I really love network programming. Pretty much ever since I discovered the internet, network programming has been my big thing. And I kind of get this idea that most Rubyists view the world this way: you know, they really know their application, their web framework, and Rack, and below that everything is a black box. And really quick, I'm curious how true that actually is. So how many of you have really dug into the source of any web framework? Pretty much everybody in the room. How about people who've dug into Rack to see what's going on? How many of you have really looked at the source code of a web server, really dug into it? Wow, quite a few — that's a little bit surprising. How many of you have dug into the VM? A bit fewer, but a bit more than I was expecting. How many of you have really dug into the source of the OS kernel? Yeah, a bit fewer there. So, you know, I really think there's a lot outside of HTTP, and most of the stuff I was doing prior to web programming was not related to HTTP. So in 2003 I started this project called DistribuStream. The idea was that it would be BitTorrent, but better. I originally started writing a BitTorrent library in C — BitTorrent was originally written in Python, and I was all about C for speed at the time, and I'm like, oh, a rewrite in C will be faster and better. And as I started doing that, I really started noticing all these problems with BitTorrent, and I thought I could do better. And I spent about two years trying to develop a replacement protocol, DistribuStream, in C.
I learned a lot of stuff about network programming in general, but didn't really make much of it. So in 2005 I had to step back and admit I wasn't really being productive. I needed to learn some new language, better than C, that would make me more productive. And there's this quote — I believe it's by Bill Joy; I tried to find it again after I first saw it, but I can't — but it goes a little something like: there comes a time in every C programmer's life when they realize that they've debugged enough memory leaks and enough pointer arithmetic errors, and written enough linked lists and enough hash tables, that they might as well move to a language with a higher level of abstraction. So I was checking out a bunch of languages, and I saw Ruby — my roommate at the time was really into Ruby — but, you know, I was all about C for speed, and Ruby had this reputation of being pretty much one of the slowest languages ever. So I was sort of overlooking Ruby for that reason, but then I heard about this new VM called YARV. And seeing that, I'm like, okay, well, maybe Ruby will be faster soon. And what really sold me on Ruby was Rails. Rails had just come out, and I had started working for a startup company, and, you know, they wanted to do web programming, and Rails was really good for that. So in 2006 I was working with a senior project team at a local university, and I had them start rewriting DistribuStream in Ruby. I sort of had this idea that, you know, BitTorrent would be the Python peer-to-peer protocol and DistribuStream would be the Ruby one. And I was using EventMachine to handle, you know, planning for massive numbers of connections. The real idea that separated DistribuStream from BitTorrent was that DistribuStream had insight into the entire network; it was all centrally managed.
And then you could apply collaborative filtering algorithms like Slope One to sort of predict what the transfer rates would be between peers that had never exchanged any data. So at the same time I was working with Rails, and we were having a lot of problems with deployment. You know, the initial way you could deploy a Rails app was with CGI. With CGI, every single request had to load the entire Rails framework, and that was really slow. And then FastCGI came out — you might remember Lighty. A few people, wow. Lighty (lighttpd) used to kind of be the main way that you were supposed to deploy Rails apps, and it was pretty good at the time, with FastCGI. But what I thought Rails really needed was something like Apache Coyote. Apache Coyote is an HTTP connector for Java — so basically your web app can talk HTTP directly. I was sort of mulling over writing that myself, but I didn't have to, because this thing called Mongrel came out. It really came out at RubyConf 2006 — it sort of upstaged Evan's keynote. Sorry, Evan, that might have changed some things. I had actually seen it. So I was waiting for Matz to announce that YARV would be coming out that Christmas — I was really expecting it for some reason. But that didn't happen. There wasn't any YARV. Where was YARV? But, uh, there was an open bar at the hotel, and I started drinking. And then they had this thing where you could come ask Matz questions, and I got a bit belligerent with Matz — I'm sorry about that. I was a little bit drunk. But after that, I met this guy — you might remember him; at the time he had hair. That would be Zed Shaw, if you don't recognize him. You know, I'd seen him earlier that day. He got up in front of everybody and said, "I'm Zed Shaw, and I wrote Mongrel," and he received thunderous applause. So, you know, people used to like Zed.
Quite a lot of people care about, like, speed and performance and stuff like that, and our Rails app was actually going into infinite loops once we deployed it on Mongrel. If you've ever installed Mongrel, you've probably seen this thing called the cgi_multipart_eof_fix patch, I believe. There was this bug where anybody could send a malformed multipart POST to your server and put it into an infinite loop. And I started talking to Zed about it — like, oh my God, we've discovered this crazy security vulnerability that we're trying to keep under wraps for now, until we can get a patch out for it. And I was talking to Zed about DistribuStream and using EventMachine. And he said, you know, EventMachine basically sucks. He had worked on his own project, Ruby/Event, which was a wrapper for libevent in Ruby. And he had some problems with that and never really got it done. Never really did. So meanwhile, later at the conference, there was this sort of impromptu board meeting — I don't know if any of you know about this. But basically, Matz's employees got together on Sunday night and created this new repository for 1.9, sort of froze commits to the master branch, and merged YARV into it. I don't know if I was in any way influential in that, but that was a really good thing to see come out of the conference. So, 2007: you know, I had this project built on EventMachine, and we were having some issues with EventMachine. I became a committer and tried to fix as many of these problems as I could. But basically, the code base was really hard to work with, and I just got really frustrated. So Rev — actually, I didn't start it. It was started by this guy named Frankie Rines and by Zed Shaw. It was to be a Ruby wrapper for the libev library — sort of like libevent, but maybe a little bit cleaner, a little easier-to-use API, that sort of thing. But both of them soon left, basically before either of them had committed any code at all.
So, you know, it's easy to complain about something like EventMachine, right? But I think the best way to complain about stuff is to try to write something better and see if you can. So, Zed and I had kind of become friends. Anyway, on the blog theme front: I introduced him to a designer, Rines Mard, who made — if any of you remember it — the "Zed's fucking awesome" blog theme. And shortly after she made that for him, Zed posted, you know, a certain blog post. I'm sure you all know about it. But Zed was basically gone, and, like, this Frankie guy had disappeared. So basically, I was the only one left to try to write this evented framework. And that framework is now Cool.io; you can find it on GitHub. So, what are the kinds of things you'd use it for, right? If you need to handle a large number of incoming connections — there's this whole C10K thing you might have heard about, like, how can you scale to 10,000-plus connections, right? With a library like this, you can do that. Large amounts of IO: if your program is really IO-bound, primarily, and doesn't use a lot of CPU — you know, this is something you could really look at. And the other thing this is really good for is shared state between connections. When you have shared state and you're using threads, you have to synchronize it, and it's hard — concurrency is hard. And this lets you sort of eliminate the concurrency and do everything with callbacks. So, what sort of things would you actually write with this? Web spiders are a big one. After DistribuStream, the next thing I used it for was at my company, which ran a lot of web spiders — I used Rev to build a massively concurrent web spider. HTTP push servers that maintain persistent connections — so, again, it's going back to that massive-number-of-connections thing. And chat servers, IM, you know, that sort of thing.
That's a good shared-state problem, where you're trying to take some state and deliver it to all the other connections. So, Rev's supported platforms: it works on MRI 1.8.6-plus. It also works on YARV. And now I've finally got it working on Rubinius HEAD, after some changes I made and some changes they made there. So, here's a quick crash course on evented programming. Normally, when you're trying to get stuff off the network, you just call a method, give it some params, and get a response. But with evented programming, instead of doing that, you use these callbacks. So if something's successful, you have an on-success callback; if something fails, you have an on-failure callback. Another way to do this is with a block: you can start your request, and you get this response object back, and you can look at that to see whether your request was successful or not. So, EventMachine kind of reinvents the whole I/O layer. EventMachine was sort of originally conceived to be something like a multi-language event library binding, and it kind of didn't succeed at that for some reason — it only succeeded in the Ruby community. And basically it reinvents the whole I/O layer. I built Cool.io with the Ruby primitives, so all the sockets, all the I/O objects, come straight from Ruby. The SSL stuff also comes straight from Ruby. So, there are really two things you need to know about in Cool.io. You have the event loop object — and this is the only thing in the entire library that blocks. And then you have these event listener objects that you attach to it, and those are all non-blocking. So, here's some quick example code. This is your basic echo server. You have a connection class that has these three callbacks here: when something connects to it, when the connection closes, and when something is read from it. And it's really simple there: whenever you read something, you just write it back out, right? And then down there at the bottom, you see it creating the TCP server.
You give it that connection class, and then you attach that to the event loop and run the event loop. And it'll just sit there and block, receiving incoming connections. For doing HTTP, you have quite a few more callbacks. This exposes a lot of the events that can occur throughout the HTTP request lifecycle there. So this will basically connect to a web server, make a GET request, and print out the result. It's also got filesystem monitoring. Anyone ever heard of Watchr? Yep, a few people. Yeah, cool. Watchr is built on Rev and uses it for filesystem monitoring. So this guy named Ryan Dahl contacted me after writing this web server called Ebb. He wanted to support using Rev side by side with Ebb. Ebb was based on libev as well, so it seemed like a really good fit. I wrote Rev originally for 1.9 and backported it to 1.8, and Ryan actually found some really nifty tricks for interfacing with the 1.8 thread scheduler, and I adopted those into Rev at the time. And then, you know, Ryan kind of gave up on Ebb — it never really got popular — but he took the idea of having a libev backend and created Node.js out of that. So, some next steps I'd like to happen for Cool.io: asynchronous file IO. If you've ever used Node, everything is asynchronous. And to accomplish that, there's sort of this companion library by the author of libev called libeio. It puts all these blocking requests into thread pools, so they can be asynchronous too. And then Roger Pack has offered a $50 bounty to anybody who will swap out the libev backend and switch it to libevent — libevent just has better support on Windows. So then I discovered Erlang. This book came out, and I read it, and I thought Erlang was really awesome. I have a quick video to play for you here. It's kind of loud, so please cover your ears if you don't like loud noises. "Hey, do you want to feel super parallel? Erlang! It's the language for people who need gratuitous amounts of parallelism."
"It's almost literally hyperland. It's like adding parallelism to a pure Euclidean space. Sound the alarm: you're going to be uncomfortably parallel. What's that? You want databases? We'll have petabases. Petabases. Petabases. You'll be good at them. It's a functional language for men. Monctional. Monctional. Pmap puns. Functional. Functional. Parallel. Pmap is our man. You'll be so fast that Google will be like, slow down. Parallel ports. So much parallelism. Yaw! And it's parallel. Search parallel, holy parallel. Parallel. Parallel. So many children. So many children. 4,096 children. Hyperland. Your children, and they'll be good at databases. And your children run abnormally fast. They run as fast as Google. It's actually Google. And it'll be a tie." Awesome. So, when I was originally working on DistribuStream, I was playing around with a bunch of concepts for how to use threads, how to do IO efficiently, and how to manage the CPU. And sort of the model I came up with was: you have a single evented IO loop, you only run one thread per CPU core, and then, to get higher levels of concurrency beyond that, you use lightweight user-space processes. There are a couple of tools you can use for that: on 1.9 there are these things called fibers, which would appear in Ruby later, and on Unix there are these things called ucontexts that you can use to do that. And both of those are basically coroutines. So, Erlang had really created a good implementation of this model — you know, it blew my mind when I originally saw it. And at the time, YARV had gotten fibers. And so I tried to take the ideas of Erlang and build them on top of fibers. And that Christmas, YARV was released, and I was a happy camper. So, in 2008, I released Revactor.
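The coroutine idea just described — a task that "blocks" by yielding control, and gets resumed later when its data is ready — can be sketched with Ruby 1.9 fibers. This is a toy illustration of the scheduling idea, not Revactor's actual API; all the names here are made up:

```ruby
# A toy sketch of cooperative scheduling with fibers: one thread
# interleaves a "blocked" reader with other work. A real scheduler
# (like Revactor's) would resume the fiber when IO becomes readable.
order = []

reader = Fiber.new do
  order << :reader_waiting
  data = Fiber.yield(:need_io)   # "block": suspend until the scheduler hands us data
  order << [:reader_got, data]
end

worker = Fiber.new do
  order << :worker_ran           # runs while the reader is "blocked on IO"
end

reader.resume            # runs until it would block
worker.resume            # other work proceeds in the meantime
reader.resume("payload") # "IO is ready": resume the blocked coroutine
```

The point is that `reader` is written in a straight-line, synchronous style, yet never ties up the thread while waiting — exactly the property the fiber-based libraries discussed here exploit.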
And Revactor is actors for Ruby 1.9. So, one of my big motivations for this was that I had originally written a web spider for a company in Erlang, and my boss kind of found out and said I couldn't use Erlang, so I decided to do it in Ruby instead. So, yeah, the basic idea is: anywhere you would normally block doing synchronous IO, you can use fibers instead, and switch to other fibers that can do other things while you're blocked on IO. And this technique was seen later in a couple of other libraries. As far as I know, this was the first one to do it — I was doing it on 1.9 before it was even released. But a couple of other libraries do it too: NeverBlock and EM-Synchrony, and I'll talk about EM-Synchrony later. So, you know, a lot of people really don't like the callback model — they sort of insult it and say things like "callback soup" and "callback spaghetti" and "callback hell." So, Revactor: it was the first — and I still think the most comprehensive — solution for combining fibers and events. These other libraries just, you know, give you synchronous IO on top of an evented backend. But what Revactor does is let your fibers communicate with each other, block on each other, and send each other messages. Here's a quick example of how you would write an echo server with Revactor, and really, you know, this looks exactly like the kind of thing you would do with threads, right? You create a listener socket, you have a loop that listens for incoming connections, and then it spawns an actor — which is your concurrency primitive in this case. The actor sits there, listens for the client to write stuff, and just writes it back out to the socket. So, since Rubyists seem to love web apps a lot: you can write asynchronous web applications using this web server called Rainbows. And Rainbows is the Unicorn web server modified for asynchronous request processing.
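Stepping back to the actor messaging for a second — the core primitive behind that actor-style echo server is a mailbox you can block on, where messages you aren't asking for stay queued. Here is a minimal sketch of that idea in plain Ruby, using threads rather than Revactor's fiber internals; the class and method names are hypothetical, not Revactor's API:

```ruby
# A toy actor mailbox with Erlang-style selective receive:
# receive(pattern) blocks until a matching message arrives, and
# non-matching messages are left in the mailbox for later.
class Mailbox
  def initialize
    @messages = []
    @mutex = Mutex.new
    @cond  = ConditionVariable.new
  end

  # Asynchronous send: drop a message in and wake any waiter.
  def send_msg(msg)
    @mutex.synchronize do
      @messages << msg
      @cond.signal
    end
  end

  # Block until a message matching `pattern` (via ===) appears,
  # remove it, and return it; everything else stays queued.
  def receive(pattern)
    @mutex.synchronize do
      loop do
        idx = @messages.index { |m| pattern === m }
        return @messages.delete_at(idx) if idx
        @cond.wait(@mutex)   # nothing matches yet: block
      end
    end
  end
end
```

Usage mirrors the idea of asking your mailbox for one kind of message while another kind waits its turn: `mb.receive(Hash)` returns the first hash even if an array arrived earlier, and the array remains available for a later `mb.receive(Array)`.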
And Unicorn, if you don't know, will automatically spin up one VM per CPU core. So if you want to have a single web server that takes advantage of all your CPU cores and lets you do evented programming, you can use Rainbows. And there are some quick examples in the Rainbows documentation — it's pretty neat. But I don't really know Rainbows well and have never really tried this. If you're interested in doing web programming with this, take a look at Rainbows. So, I need a volunteer from the audience. Okay, David's already up here. So we're going to do some live-action role-playing of the actor model. So come up here on stage. I have a couple of messages here: this one is an array, and this one is a hash. So I'm going to be off doing something, but come and put the array in my pocket. So yeah, I'm up here speaking. I'm up here doing my thing. Now I'm ready, and I'm going to look in my pocket for a hash. And all I have is this array. So now I'm blocked — I can't do anything. So come put the hash in my pocket. And now I've got a hash, and I'm unblocked, and I can go do my thing. But that array is still back in my pocket, so if I ever do care about an array, I know where it is. So that is my attempt at doing live-action role-playing of the actor model. So, I'd love to continue talking about Revactor, but I'd really like to get on to Reia. If you want to know more about actors or Revactor in general, there's a website for it. So, you know, I really became disenchanted with this whole idea of faking synchronous IO on top of fibers. And I have another little video to share with you here to explain that. "Are there any obvious disadvantages to patching a blocking library to use fibers at the socket level? It's far too much effort to have to rewrite large chunks of every library just to make it async- and fiber-aware. Just use threads." "No. Not gonna happen."
"Even if this is a personal project, I won't use threads, so I guess I'll just have to continue rewriting parts of a bunch of libraries as I go." "Oh, wait. I remember who you are. You're the omniscient guy who needs web scale, and you think threads are going to stop you from servicing your millions of users. And I remember that you hate fibers for whatever reason." "I don't hate fibers. I just think a lot of people are misusing them for silly things." "My boss wants a scalable async platform backend, and from experience, I'd rather wrap the async code in fibers than have tons of callback spaghetti." "What scale are you really talking about? Because last time, you just started blurting the word 'cloud' at me like it meant something special." "It's going to be large scale — obviously not right from the beginning, but as functionality and the client base grow, it needs to scale up majorly, with interfaces to pretty much every large social-media-related site on the web." "Do you have any actual numbers, or is 'web scale' really the best description of your capacity plan?" "I can't talk numbers. The platform is still in the early development stages, but it needs to be able to make many hundreds of requests in real time." "You don't know what the numbers are, and yet you're saying hundreds of requests in real time? That can work just fine using threads." "Okay, let's say thousands of requests in real time per online client; that should give you a rough idea, because it's too early in development to be too specific." "That just means you are doing no real projections, which means it's purely technological masturbation. And the thousands of requests per client in real time is more than likely a completely silly sentiment, for the plain and simple reason that a particular user is unlikely to even want to read the results of thousands of requests each time they visit."
"I'm not the boss; I just write the code required to make shit happen. The data fetched over hundreds of thousands of requests will be processed into averages and some statistics." "You should tell your boss to pay for some consulting from someone that's built these kinds of systems before. You're already set in stone that you need web scale, which is both architecturally wrong and bad for the business, and it's only because you want web scale that you seem completely certain that threads are inappropriate for your use cases." "You really don't have any idea of the specifics of the system, so you cannot judge what sort of scaling is required." "You don't seem to understand what sort of scaling is required either, and you clearly don't understand the difference between threads and fibers in this kind of context." "I understand that threads have more overhead than fibers, and any shared data would require locking and syncing." "Fibers still share state, and if you monkey-patch network libraries to use fibers, it becomes hard to understand when your fibers are yielding, so you still need locking anyway to provide transactional mutations to shared state." "Fibers do not run concurrently." "That doesn't change the fact that fibers still share state, and by trying to stick a pretend-synchronous API on top of evented code, you lose the sense of where context switches are happening, and you wind up with all the problems of threads — which would still scale very well." "I don't care." "JRuby and IronRuby already support concurrent multithreading without any sort of global interpreter lock, and Rubinius will soon support it too. This means one instance of your application will scale across multiple CPU cores, while an async program will need to run a separate virtual machine per CPU core." "I don't care." "Yehuda Katz, a fucking core developer of Rails 3, recommends that you use threads." "I don't care."
"Varnish Cache, which is arguably the fastest and best HTTP cache available, uses threads to handle multiple concurrent connections." "I don't care." "MySQL, which is arguably the most popular open source database in the world, uses threads to handle multiple concurrent connections as well." "I don't care." "What the fucking fuck? I argue with people who think they need web scale and that fibers are the only solution." So yeah, that actually wasn't a straw-man argument there — large parts of that are verbatim from a real IRC conversation. So, really, what it comes down to is the principle of least surprise, right? In Ruby, the principle of least surprise is not to use evented programming; it's to use threads. And if you want to do evented programming, there is a language where events are the principle of least surprise, and that's JavaScript — and I really respect Node for that. You know, Node just combined events and JavaScript in such a great way, and it's awesome, and I recommend Node. So what happened is, Erlang kind of fizzled. It's not that it failed per se — there are a lot of really cool projects in Erlang — but all the hype eventually died down. And I saw this really neat passage in the book Coders at Work, which I highly recommend; it was some of Joe Armstrong's thoughts on Erlang. And at the end of it, he said that to make Erlang popular, you know, Microsoft might stick some curly braces on it. And I'm like: no, I could just use some Ruby syntax. So, Zed Shaw — he's really passionate about languages, about getting people to learn how programming languages work, that sort of thing. And Erlang has some really horrible syntax. And what really makes Erlang's syntax bad — it's just adding insult to injury — is that Erlang is conceptually so different from every other language that when you add this weird Prolog-y syntax on top, it makes it that much more impenetrable.
So I started working on Reia — I started on it in 2008 — and I really wanted to leverage Erlang's strengths. Erlang's main one is non-stop systems. So, you know, you might have seen the claims that Erlang gets 99.999-whatever uptime, but really the core idea there is that there's no theoretical reason why you should ever need to stop the Erlang VM. And real-time programming — together with non-stop programming, really all the other ideas in Erlang come from these two ideas. So, to have a non-stop system, it really needs to be fault-tolerant: if you have faults and you don't tolerate them, your system is going to stop. Code swapping: if you want to make any changes to your code, you've got to be able to do that live, and to do it transactionally, inside a concurrent environment. And distribution is really the final strength of Erlang there. So I wanted to take Erlang and add Ruby syntax to it. The other thing about Reia is that it's an immutable-state language. And really, when you look at the whole functional-language-versus-imperative-language thing, I think the core idea there is really immutable state. That's really the only major difference between those two language families. I was actually talking to Brian Ford about this: it has been proven that the continuation-passing style of functional languages is equivalent to the SSA form of imperative languages. Here's an obligatory Lady Gaga slide. So, I presented Reia at Erlang Factory back in 2009. At the time I had a mostly working implementation, and the big thing I was excited about was that Erlang has this idea of a gen_server, where they take the actors of Erlang and wrap them up and make them into something that's almost an object. So I took that and actually put sort of a Ruby-style class syntax onto it. At the time I had a web framework called Ryan — this is what really inspired me to do concurrent objects. I saw this quote from Alan Kay.
He had this idea of objects really being these things that are sort of like individual computers. Web services are that kind of thing, and they talk to each other with messages. So, Node started blowing up in 2009, and it really got people interested in event-driven programming. Which I think is neat, because Node and Cool.io share some of the same technologies — although I haven't added libeio yet. But in late 2009, as I was working on Reia, it was like: OMG, too slow. It was really, really slow. As I tried to develop a standard library, it just wouldn't work, because each file I added would take that much longer to load. Basically, I really didn't know what I was doing the first time I wrote Reia, and I'd fucked up the language, so I decided to start over. So in 2010 I basically completed the Reia rewrite, and it's over 9000 times faster than what I had earlier. I learned how to write a language on top of another language by failing the first time. And I really started looking at the Erlang compiler itself to get ideas. I had kind of tried to do everything myself the first time, without their ideas, and that was definitely a bad idea. And the neat thing that came out of this was that now everything in Reia is an object. So, originally I didn't have any sort of non-concurrent objects in the language, but I ended up adding these immutable objects. So let's take a look, and we'll see if that will work. Yeah. Let me make that a little bit bigger there. Can everybody see that? Oops. Alright, so here is the Reia source tree. I'm going to start this thing called ir — it's just like IRB. So there it is, running on top of Erlang there. So, a quick first example: hello world. If you were in Brent's talk, you might like this idea of putting puts on the end of things. And the main reason I do that is that Reia doesn't have a top-level scope, mainly because Erlang doesn't have that.
So if you wanted to do it the more Ruby-like way, you'd have to do this. There's hello world again. So, Reia has this: where this would be an array in Ruby, this is a singly linked list, and you can call methods on it like that. And I actually made this bang syntax first-class here. What that looks like it does is modify the receiver — it's an immutable-state language, so you can't modify the receiver — but what it's actually doing is changing the binding. So the list is now bound to a new version of the list there. And the other neat thing it has: if you've ever done any Python, you might have used list comprehensions in the past. Also, there are tuples — again, this is something you may have used in Python. But these are basically the equivalent of arrays, as opposed to a singly linked list. And another neat thing it has is pattern matching, so you can have these deeply nested structures. So what that's actually done is bind all these variables there. So, it has modules, just like Ruby. Modules are just sort of like collections of functions — they're not quite as powerful as they are in Ruby. So you call plus_two and get 42 there. So, then, the other thing is there's no real idea of reopening modules or classes — so I can actually redeclare the class here. And the reason for that is sort of to get closer to Erlang's hot code swapping semantics. So this is declaring a class of immutable objects here. It does have these little shortcuts here: you can actually bind an instance variable directly from initialize there. And I can define a method here, with whatever variable names there. As for the instantiation syntax — I do want to add .new; I don't have singleton objects yet. So for now, to instantiate something, you have to sort of do it Erlang-style there. So you call .add and get that. And then there are lambdas.
So, "fun" actually comes from Erlang, but if you've ever used Erlang, the fun syntax isn't very fun. What I've done is sort of take the stabby lambda syntax from Ruby 1.9 and adopt that. So I'll just do a little times-two there. And there's some indecipherable Erlang output about where in the code server that function happens to be located. And the invocation syntax here is just like Ruby's, actually — so you can just call it like that. So, getting on to the actual concurrency stuff: Erlang calls its actors processes, and I'm sort of taking that same name there. What I'm going to do is bind this to a pid, and spawn is the method you call to create a new process. So, when I reached into my pocket to check for those messages earlier, what I was doing was this receive thing. And receive sort of works like a case statement, but you're not really doing a case on anything in particular — you're kind of doing a case on your mailbox. So this is going to wait for a symbol called :foo, and when it gets it, it's going to say "I got a foo." So I hit enter there, and that creates a new pid there. And then, to send that pid an asynchronous message, you use the bang operator there. So we're going to go ahead and send a :foo, and then, when I do, it says "I got a foo." So that concludes my little quick tour there. Let me see if I can actually get back to Keynote here. Really quick — what's the mirroring thing? It's supposed to be Command- or Control-Option-Command-something. Alright, so I don't know if any of you saw it, but at RailsConf and at OSCON, Ilya Grigorik talked about EM-Synchrony. And you know, I'm not really opposed to EM-Synchrony — I think it's pretty cool — but EM-Synchrony basically does the same thing as NeverBlock: trying to take fibers and let you do fake synchronous IO with EventMachine. And the thing about it is, he really started popularizing this idea, and people have sort of been doing it ever since. I think EM-Synchrony is mainly aimed at web scale — he demonstrated things like Rails running on top of it.
I don't think evented backends are really good for the typical things that Rails apps tend to do, but because this is popular, I sort of dusted off Revactor and tried to modernize it. So if you're interested, check it out — though I like Reia better. One of Reia's biggest drawbacks in recent history is that I've done a really bad job of documenting it, but I've created a new website, sort of inspired by CoffeeScript's, and I'm going to load it up with tons and tons of examples. So, one of the neat things being worked on right now by a few of the contributors on the project is a PEG grammar. It's available here on GitHub. It's built on top of this PEG generator for Erlang called Neotoma, which is sort of like Treetop. And the PEG grammar is being written by these two guys here, and it actually has tests — I don't have any tests on my grammar; it's only covered by the other tests we have on the language. Some of the neat things a PEG could bring: right now, the current parser I've written has a lot of trouble handling whitespace — removing semicolons from the ends of lines is surprisingly hard. It supports nested interpolated strings: while Reia has interpolated strings, you can't nest them, and with a PEG that's pretty easy. And then, everybody loves these slash-delimited regex literals, but they're actually kind of hard to implement, because they're ambiguous with things like divide — for example, divide-equals. So a PEG can really handle this well. And one thing I don't have on the slide that might be possible in the future: right now Reia has mandatory parens on everything, but with a PEG it might be possible to remove those in certain cases. So, some of my next steps for Reia. One of the things I've been working on lately is singleton classes — as I was saying with .new, I would like people to be able to call .new just like in Ruby.
I want to get concurrent objects back — that is something that is sorely needed, because basically everything in Reia is either immutable data or a process. Namespaces — we all love namespaces — and default and keyword arguments. And finally, I'm going to look at some ways to support metaprogramming. There's no way I can ever come anywhere close to what Ruby has, but I'd like to support some of the main use cases of it. So, one more thing before I conclude my talk here: compilers aren't hard. I know a lot of people think they're this sort of impenetrable black box, but if you actually take the time and start looking at them and studying other compilers — you know, I really had been afraid of it for a while, but as soon as I did, I discovered it's not that hard at all. So I'd encourage you to create your own freaking awesome programming language — there's a book by that same name that I would highly recommend as well. If I can do it, you can too. Alright.
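To back up that "compilers aren't hard" point, here is the whole front-of-a-compiler pipeline in miniature — a tokenizer, a recursive-descent parser producing an AST, and a tree-walking evaluator — for arithmetic with precedence and parentheses. This is a standalone toy for illustration, not anything from Reia's implementation:

```ruby
# Toy language pipeline: tokenize -> parse (to an AST) -> evaluate.
# Grammar: expr   -> term   (('+'|'-') term)*
#          term   -> factor (('*'|'/') factor)*
#          factor -> NUMBER | '(' expr ')'

# Tokenizer: split source into integers and operator/paren tokens.
def tokenize(src)
  src.scan(%r{\d+|[-+*/()]}).map { |t| t =~ /\d/ ? t.to_i : t }
end

class Parser
  def initialize(tokens)
    @tokens = tokens
  end

  def parse
    expr
  end

  private

  def expr
    node = term
    while @tokens.first == "+" || @tokens.first == "-"
      op = @tokens.shift
      node = [op.to_sym, node, term]   # AST node: [operator, left, right]
    end
    node
  end

  def term
    node = factor
    while @tokens.first == "*" || @tokens.first == "/"
      op = @tokens.shift
      node = [op.to_sym, node, factor]
    end
    node
  end

  def factor
    if @tokens.first == "("
      @tokens.shift        # consume "("
      node = expr
      @tokens.shift        # consume ")"
      node
    else
      @tokens.shift        # a number literal
    end
  end
end

# Evaluator: walk the AST, applying each operator to its subtrees.
def evaluate(node)
  return node if node.is_a?(Integer)
  op, left, right = node
  evaluate(left).send(op, evaluate(right))
end
```

Swap the evaluator for a code generator and you have a compiler; the structure stays the same, which is the whole point of the encouragement above.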