Okay. Thanks, everyone, for allowing me to speak on something non-Ruby-related, but we'll see. I'll talk a lot about Ruby, my history with Ruby, and how that led to creating Phoenix. For those of you not familiar with it, Phoenix is an Elixir web framework. It uses the Elixir programming language, and we borrow some ideas from Rails, so I think you'll find it quite familiar. I'm Chris McCord. I wrote a book called Metaprogramming Elixir, so if you're into Elixir, go ahead and check that out. And before I get started: since I'm talking about a web framework, I'm going to talk a lot about what we do better than Ruby and better than Rails, but I do want to say that I love Ruby. I worked at a Ruby consultancy doing Rails apps for about five years professionally, and I loved most of it. We built dozens of Rails applications and shipped quite a few to production, so I've gotten experience with a lot of different problem domains. And what we saw over and over again was clients wanting real-time, persistent connections, and we were running into a wall with Ruby. So my path away from Ruby started with a gem named Sync. By show of hands, who knows what Sync is? It does real-time Rails partials. The selling point of Sync, which I created about three years ago, is that you can basically just replace render partial in your Rails app with sync partial and move the partial to a special folder. Then if a user changes on the server, it's going to re-render all the user templates on the server, push them into the browser, and have those update in the browser in real time. And it just works like magic. You don't have to think about it. So people are using this and they're quite happy with it. But it has some shortcomings, and when I first created it, my final verdict was: it was fun, and it worked.
But what was required to make Sync work, using real-time events in Rails, kind of pushed me out of the mindset of Ruby being this really elegant language, at least for solving these particular problems. What I ended up having to do was write code like this. Because we want to see code, right? This is code from the Sync library. What I had to do in Sync to do real-time events at all in Rails was run EventMachine. EventMachine is an event loop in Ruby, so it uses JavaScript's concurrency model, which I'll talk about in a second. While I was building this, I said, okay, I can't block anything in my Rails app, so how do I communicate to push events out to the browser? It's like, okay, I'll add EventMachine. But the issue is, sometimes the EventMachine thread that I started next to my Rails app just died and went away mysteriously. So I had to write code like this: anytime we call into EventMachine, I ask, are you running? And EventMachine will say, yes, I am running. But then it turns out that the EventMachine reactor thread is actually dead, while EventMachine still thinks it's running. So you have to cleanly shut down EventMachine by doing a stop and then an instance variable set, because there's no other way to tell EventMachine, hey, you're actually dead. The code required to make Sync work was a lot like this. So something that should be trivial, especially something we're seeing in other languages, pushing out real-time events, was very, very difficult and very fragile. And we had other caveats. To make real-time events work in Rails at all, I had to add Faye and run a separate Faye process next to my Rails app. This made deployments a lot more costly. We were deploying most of our client apps to Heroku, so suddenly we had to spin up another worker process just for Faye if we wanted to use Sync. And we had to use EventMachine.
So EventMachine is an event loop in Ruby, like I said, and this is JavaScript's concurrency model. The moment in Ruby that we ask, how can we do something concurrently, and the answer is, let's look at JavaScript, that should make us feel scared, right? It was just very fragile. There need to be better ways to handle these things. And again, it was unreliable. The thread would just die. There was nothing watching over that thread, so I had to programmatically, defensively check if the thread had died. It started giving me this feeling in my stomach that maybe Ruby is not the best thing for solving these problems. So at that point, I started looking around, saying, okay, if Ruby is not going to be the best thing for concurrent connections, what are other people doing? And this was a couple of years ago: WhatsApp, of recent 19-billion-dollar-acquisition fame. They weren't quite as large yet, but at the time I looked them up, they were quite transparent about their architecture. And they were handling 2 million connections per server. The servers were pretty large, like 24 cores and 96 gigabytes of RAM, but they were getting 2 million users at the same time on each server, chatting together. So this gave me some language envy. I also heard that they had 400 million active users with 30 engineers supporting them. Thirty programmers supporting 400 million users. So I was like, okay, what is the secret sauce here? And they were using Erlang. I remembered I had heard about Erlang, but I didn't really know anything about it. It's a functional programming language that's been around for decades, and up until this point, I had paid no attention to it. Even throughout my undergrad computer science degree, it was never mentioned once. So I started looking into Erlang, asking, okay, how are they doing these things?
But the Ruby tooling and the Ruby mindset of getting up and running quickly, being very productive, having nice tooling, I didn't find there. So it was kind of difficult to even get started in Erlang. But it had a lot going for it. I looked into why this Erlang thing is able to scale and be super successful. What are they doing? Well, it's been around longer than the Linux kernel, and Erlang handles about half the world's telecommunication traffic. So if you are using a 3G network in Europe, there's about a 50% chance that your traffic is going through an Erlang system. It's out there, handling tons of traffic with high success. In fact, some Erlang systems have reported nine nines of availability. They're out there, running for years and years and staying highly available. But I wasn't happy with the getting-started experience in Erlang. Then I remembered that someone very close to the Ruby community, José Valim, had gone off and started this language called Elixir. I remember seeing when he announced it, I think in 2011, and thinking, he's crazy, what is he doing? And I just ignored it. But then, coming into Erlang, I was like, okay, wow, maybe I do need to look at Elixir. Most of us here probably know who José is, right? He's a Rails core team member. He wrote Devise. He wrote Simple Form. He wrote Inherited Resources. In any of your Rails apps' Gemfiles, you're going to have some code that he wrote, guaranteed. So I started looking into Elixir, and that's where I fell in love with the language. I fell in love with how he took, spiritually, some of the things that we like in Ruby and brought them to a new paradigm. He focused really hard on tooling and an elegant language, and he was able to take advantage of all Erlang had to offer, but with this really, really nice language.
So this is what started my path to creating Phoenix: I wanted a web framework for the modern web that could tackle concurrency and the real-time connections that I needed. We like to say that Phoenix is great for building HTML5 apps and form-based applications like a standard Rails app; JSON APIs are great too; and also distributed systems, because Erlang and Elixir are really great at running a cluster of machines together and operating distributedly. And since it's written in Elixir, it's a beautiful syntax, which you'll find on the surface very similar to certain things in Ruby, and we're highly productive. We focus, just like the Ruby community does, on great tooling and great productivity. But we do it with a really fast runtime. Before, like I said, I worked at a Rails consultancy. We pitched Ruby and Rails to clients quite a bit, and we always said that the productivity outweighed the performance for us. As a community, we almost wear it as a badge in Ruby, saying, we don't need Twitter scale, right? Who needs Twitter scale? So we say, okay, performance is not that big of a deal. But it turns out that even at small scale, a lot of our clients ran into performance issues, especially when integrating with third-party APIs. We had one Rails app that needed to call out to about seven different APIs anytime a user visited after being away for a while, and we had to run a ton of Sidekiq workers for that. So something that's not that hard computationally, calling out to seven APIs, ended up being very, very expensive to run. It turns out that we actually do need to care about scale a little bit. And with Elixir and Phoenix, we no longer have to make this decision. We really focus on productivity, but Phoenix gets both performance and productivity, because of Elixir and the Erlang runtime.
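To give a feel for why the seven-API case stops being painful, here is a minimal sketch in plain Elixir (not code from the talk): each pretend API call runs in its own lightweight process via Task.async, so seven calls take roughly as long as one. The fetch function and the API names are made-up stand-ins for real HTTP calls.

```elixir
# fetch is a stand-in for a real HTTP call; the sleep fakes network latency.
fetch = fn api ->
  Process.sleep(100)
  {api, :ok}
end

# Hypothetical list of the third-party services being called.
apis = [:users, :billing, :search, :mail, :geo, :stats, :feed]

# Start one lightweight process per call, then await all the results.
results =
  apis
  |> Enum.map(fn api -> Task.async(fn -> fetch.(api) end) end)
  |> Enum.map(&Task.await/1)
```

Because every call is concurrent, the whole batch completes in about one call's latency instead of seven calls' worth, with no separate worker fleet to provision.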
So throughout the rest of this talk, I'll talk about how that works: why we're able to really give you productivity, and how the runtime gives us this really nice performance. We're going to talk about the features of Phoenix. Some of these are going to look very familiar, since we borrowed some ideas from Rails. And if you saw DHH's keynote, the Action Cable feature in Rails, we were able to give back, and they borrowed some ideas from us. So it's kind of cool to see that go back and forth in both directions. We'll talk a lot about how we get great productivity and tooling, and about the performance that we're seeing in actual real systems. And then I'll focus a lot of the time on channels. Phoenix channels were kind of born out of the idea that I had with Sync originally in Rails. And then we'll review how applications are made on the Erlang platform and how that plays into building applications for the modern web. So, productivity. We focus on almost exactly the same things you do in Rails. For short-term productivity, we have great documentation and guides on phoenixframework.org, and we also have generators. So we have scaffolds like you'd be familiar with from your Rails app, but we even scaffold JSON APIs. And we eliminate trivial choices. We focus a lot on convention over configuration. We don't go quite as far as Rails goes, but we still want to eliminate those trivial decisions, like: where do I place these files? How do I name them? That's a done deal. That's taken care of. But we have shifted and said that we actually value long-term productivity just as much as short-term productivity. Because this has been an issue for me working at Rails consultancies: we inherit a lot of legacy Rails apps, and maintainability has been a nightmare for us.
If you've ever had the misfortune of upgrading a legacy Rails app from an RSpec 1 test suite to an RSpec 3 test suite: I've spent months of my life literally just getting the tests to pass on an app where QA verified that the app works on Rails 4, but the test suite wouldn't pass because RSpec completely changed. So maintainability has been a big focus for us, and we get a lot of that for free from the way Erlang and Elixir applications are designed. We'll talk a lot about that. And we have an easy install. It's a little bit quirky today, you have to give it this URL from GitHub, but soon you'll be able to say mix archive.install hex phoenix. And mix is like a combination of rake and bundler in one. After you've got Phoenix installed, you can just say mix phoenix.new, give it an application name, and this is just like rails new. It's going to generate an application for you with sane defaults, and in about 30 seconds you can just run the server command and have a Phoenix server up and running. So it's very similar to the Rails setup. Rails really changed the game as far as how quickly you can get up and running, so we tried to match that: run a couple of commands, get an app up and running quickly, the decisions are made for you, and you can start getting to work right away. And we can look at some code. If you're not familiar with Elixir, this is still probably a little bit familiar. We have this concept of an endpoint. The problem that we had in Rails historically is this monolithic design, or integrated system, as we now call it. In Phoenix, we said, okay, we don't want anything to be global. We don't want anything to be a monolith, but we still want that integrated-system experience, right? There is something valuable about saying, we want one thing that does what we expect, where we don't have to configure a bunch of different things. So in Phoenix, we have kind of the same feel as a monolith, but nothing's global.
So by default, your application has an endpoint that runs your web server and holds your base-level middleware. You can kind of think of the plug calls that you see here as Rack middleware: Plug is our middleware abstraction, kind of analogous to Rack. And you start one endpoint, but your Phoenix application could have ten endpoints in it, so it's not global. By default you get one, so it feels like a monolith, but as soon as you want to add another web server running on a different port, you just add another endpoint. And we also have a router, like you would expect from Rails. The very last thing your endpoint does is plug into a router. Plug is a very lightweight abstraction compared to Rack, and you interact with it at every level of the Phoenix stack. In your Rails app, you'll typically have Rack middleware, which most of the time you don't touch and don't think about, and then you'll do before actions or before filters in your controllers. In Phoenix, we have one level of middleware abstraction, and that's Plug. So throughout the entire stack, you're just plugging in these functional transforms, and they operate at the same level of abstraction. So we plug our router once we go through our endpoint, and we have a router similar to Rails's. This is where we borrow the ideas from the Rails router. We can just say resources, give it messages, and point that at a messages controller, and that's going to generate almost exactly the same RESTful resources that Rails generates. And we can also list channel patterns in our router, and channels are these real-time communication channels. So we're not just routing HTTP traffic, we're routing real-time events from connected clients. And obviously we're going to have controllers, just like you would expect from an MVC framework. But before we look at controllers, we need to talk about pipelines. These were born out of a use case that we had in Rails that we couldn't solve very well.
So I don't know if anyone's ever tried to add different Rack middleware in their Rails app; it just applies to all requests. And most often our Rails apps are going to serve HTML and often JSON. So in Phoenix, we said, you know what, we can split those middleware. For a browser request, we're going to want to do content negotiation for HTML, fetch the session, and protect from forgery by generating cross-site request forgery protection tokens, and those are computationally expensive. If we have a JSON API, we can just say, okay, all we want to do is content negotiation for JSON, and we can skip all the computation of fetching the session, because we're not going to be using sessions. This solves the issue that we had with Rails, because we can apply certain middleware only to certain scopes in our router, and this has worked out really well for us. And then you can create a scope in your router like in Rails, but then you can pipe through a particular pipeline to apply just those middleware. And it's all very explicit in your router. Then, finally, we end up in the controller, where you can see plug again. It's the same level of abstraction. So instead of Rack middleware and then before actions or before filters, you plug middleware in your endpoint, you plug middleware in your router, and then you use plug in your controller. And these are just simple function contracts. The authenticate you see here would just be a function that accepts a connection, does some authentication, and returns the connection. So it's a very, very simple mental model of how to filter your requests. And then your controller actions look very similar to Rails, right? We have an index action as a convention, and then we explicitly render a template. And we can even pattern match on the parameters and say, okay, I expect an ID for my show action, I'm going to match that out of the params map and then render a template.
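The "connection in, connection out" contract can be sketched in plain Elixir. Here a bare map stands in for a real %Plug.Conn{}, and the module, field names, and authentication rule are all made up for illustration:

```elixir
defmodule AuthExample do
  # A "plug" is just a function: it takes the connection (plus options)
  # and returns a transformed connection. The map here is a stand-in
  # for %Plug.Conn{}; :session, :halted, :current_user are invented keys.
  def authenticate(conn, _opts \\ []) do
    case conn[:session][:user_id] do
      nil -> Map.put(conn, :halted, true)
      user_id -> Map.put(conn, :current_user, user_id)
    end
  end
end
```

Because every plug has this same shape, the endpoint, the router, and the controller can all compose them the same way, just by piping the connection through each function.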
So it's pretty similar to what you would see in your Rails applications. And we even borrowed the Better Errors page. Those folks were nice enough to let us rip that out and work it into the Phoenix stack, so we have helpful error pages just like you would expect in your Rails applications. And our generators go beyond just form-based generators. We can generate JSON APIs: you can say mix phoenix.gen.json, and that's going to generate a JSON API. And we can even generate channels. So we took this idea of generators from Rails, where the goal is not so much to save programmers from writing code, but to onboard newcomers, using generators as learning tools. That's been extremely important in the history of Rails, and we borrowed it, and it's worked out very well for us. And just like you would expect from Rails generators, we have the same style of adding your validation errors to the form automatically. So if you generate a message, a form-based resource, you get a form generated, and then you get your standard validation errors. This isn't revolutionary now, but it was revolutionary when Rails included these features. So it set the bar really high, and our goal was to meet that bar and then continue adding innovative features that we can only get on the Erlang virtual machine. And one lesson I learned from Rails was that we did not want to build our own asset pipeline. I didn't want to spend a year of my life trying to do that, and the asset pipeline in Rails has historically been problematic for me, especially with deployments. So I spent about two weeks of my life evaluating different Node.js build tools. There are dozens of them. I don't recommend it. I'll just say Brunch is great. It's super fast, it just works, and it's just JSON configuration.
There are a bunch of others, and this was my least favorite part, but the point is, if you want to do any kind of asset building, you're going to have to have a Node.js runtime anyway. So we said, if we have to have a Node.js runtime anyway, we'll just use Brunch to handle it all. And it's worked really well for us. It's not coupled: people have been able to replace Brunch with webpack, which is another build tool, and it takes about 30 seconds. But it just works. And we ship with ES6 JavaScript out of the box. So I'm going to say something maybe controversial: it's probably time to move on from CoffeeScript. I used CoffeeScript exclusively for three years, and I loved it. When I first wrote the Phoenix channels client in JavaScript, it was CoffeeScript. But we have to make a bet as a framework, and five years out, I don't see CoffeeScript being around. I see it being a pain to deal with legacy CoffeeScript code bases. CoffeeScript changed the game. It served its noble purpose, and a lot of the features we see in the latest ES6 version of JavaScript are derived from CoffeeScript. ES6 is not quite there, it's not going to have everything you want, but it's close enough. We have module syntax: finally, one way to do JavaScript modules. We have the fat arrow syntax from CoffeeScript, which maintains the binding of this. We have string interpolation, with backticks because of backward compatibility. So it's still JavaScript, right? But it's nice. And it's nice enough that I think CoffeeScript is in an awkward place for the future. So at least consider going to ES6, because it's the future. It's just going to work. Wait three years and it's going to be supported in all browsers. It's the only sane bet today, in my opinion, for a transpilation solution for JavaScript. And in Phoenix, you just drop your JavaScript into a particular directory, and it gets compiled for you automatically. You don't have to think about it.
And so we focus a lot on productivity, right? We have great tooling, but performance is also extremely important to us. This is a quote from the Phoenix logs: you often see microsecond response times, and you often even see microsecond response times in development. Folks will tweet that they benchmarked their Phoenix app, the average latency was in microseconds, and the requests per second were super high, and it turns out they were running in development mode, which is like ten times slower. So we're really focused on performance, and we get a lot of that for free from Elixir. One of the ways we're able to get that performance is that we have precompiled views and templates, and we've gone a different way here than Rails, where the terms view and template are kind of conflated. They're kind of the same thing, right? We call them views in Rails. But in Phoenix, we said views are going to be modules, and views render templates. So our templates are code that we precompile into a function in the view, and then your views almost serve as a presenter pattern. Some people like to use presenter patterns on top of their Rails views; we bake in that same type of idea. And our views also go beyond HTML. We say views are not just for rendering HTML; they should be composing any kind of response. So if you want to structure JSON, you should put the JSON transformation into your view. Your controller should just be the glue between the request and your view, which composes the data. And this is a Phoenix view right here. Even if you're not familiar with Elixir: we just have a function named render, we can pattern match on "show.html", and we can return a string. Very basic. And then we can open up our shell, say View.render "show.html", and it returns that string. So nothing magical here. But this is all that happens when you're rendering an HTML template in Phoenix.
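The render-is-just-a-function idea looks roughly like this in plain Elixir. This is an illustrative stand-in, not the slide's code: the module name, template names, and assigns are invented, and a real Phoenix view would have its clauses compiled from template files:

```elixir
defmodule UserView do
  # Each clause pattern matches on the template name.
  # HTML templates compile down to functions returning strings.
  def render("show.html", %{name: name}) do
    "Showing User " <> name
  end

  # For JSON we just return a map; Phoenix would encode it on the way out.
  def render("show.json", %{name: name}) do
    %{user: %{name: name}}
  end
end
```

Calling UserView.render("show.html", %{name: "Jane"}) is all the controller ever does; there is no runtime template lookup or cache to consult.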
It's just calling a function that we've precompiled, named render; all that computation is done at compile time, and the compiled template turns into just string concatenation at runtime. So we can look at a real template. This is an HTML template using EEx, which is just like ERB: embedded Elixir. What Rails does is, at runtime, at boot-up, it precompiles your ERB into an AST the first time you ask to render a file, keeps that in a cache, and then subsequent requests hit the cache to do the rendering. In Phoenix, we're done before runtime: at compile time, we do all the work to change this into a function call. Then when we call render "index.html" for that template, it's actually just going to call a function and return the HTML. At runtime it's already in memory, and it's just string concatenation. We do the work at compile time to make this super fast. And then we can do JSON. So our views can render HTML, but we can also say, if the controller asks to render "show.json", we can just give it a map, and Phoenix will encode that as JSON. So we have one place to do response transformations, we can test these in isolation outside of the controller, and it keeps your controllers very slim and very easy to reason about. And if we look at performance, here are some benchmarks. Benchmarks are inherently untrustworthy, so I always say, measure yourself. But in the real world, we're seeing about 10 to 20 times more throughput from folks that have converted their Rails apps to Phoenix applications. Bleacher Report is probably the most noteworthy: they converted their Ruby JSON API to a Phoenix API, and they were able to go from 20 servers to one server. So we're seeing really successful deployments replacing Ruby infrastructure. Phoenix did about 180,000 requests per second in this benchmark.
It was running on a 10-core Xeon on Rackspace with something like 92 gigabytes of memory. So, big hardware. But we can see that Plug was above Phoenix. Plug is our Rack-style abstraction, and Phoenix only takes about a 10% performance hit compared to Plug. So we're able to add a lot more features and do a lot more work than Plug, but not at the cost of a huge amount of performance. But if we look at Rails versus Sinatra, we can see that Rails takes about a three-times hit in performance compared to Sinatra while adding similar features. So performance isn't everything, but in our real-world use cases, going from 20 servers to one server, it makes big business sense to actually take performance seriously. Not everyone needs Twitter scale, but a lot of people would want to go from paying for 20 servers, or 20 dynos, to a couple of dynos, right? It's a big deal. And we get this because we have a robust concurrency model from the Erlang virtual machine. We have these things called processes, but you can think of them as extremely lightweight threads. We can run hundreds of thousands of processes on my laptop. Every request that comes in gets started in its own process, and processes start in about a microsecond. So we're able to service every request coming in in its own isolated space, and garbage collection is done per process. If we go back and look at the Play framework, which is a Java framework, it was able to get 170,000 requests per second, but if we check its consistency of latency, it was actually the highest of all of these. And this is because JVM garbage collectors stop the world, right? So that 99th percentile of requests that happened to trigger the garbage collector ended up waiting a lot longer. And this is a big deal when you're worrying about users.
There's the 99% of users for whom everything looks normal, but that tiny percent ends up waiting forever. This was the Rap Genius Heroku drama of the past year, if anyone remembers that. They had to way over-provision Heroku dynos because they were getting these 99th-percentile requests that were waiting tens of seconds. And part of that is because the concurrency model cannot service requests properly and properly isolate the requests coming in. Another big deal is that the virtual machine load-balances processes on I/O and CPU. If we take a look at what Node.js does: Node.js has legendary evented I/O, right? As long as you do all your work in a callback, you're fine. But the moment you do any CPU-bound work in Node.js, you take out your entire program. So you have to be very careful in Node about what you're doing. With our concurrency model, the virtual machine schedules all work, so we don't have to worry about clogging the tubes, so to speak. We can spin up processes, do whatever work we need, and be sure that our system is going to schedule that load properly. So this is one way we get a very robust, fault-tolerant system: our concurrency model gives us a ton of this stuff for free. And we can also do a lot of compile-time optimizations in Phoenix. A lot of things that you would do in Ruby at runtime, we can do at compile time, so we avoid any runtime caching as soon as the application boots, and this gives us faster startup times. And one thing we're doing in the router is kind of neat. With the metaprogramming model of Elixir, we have a lot of nice facilities to generate code at compile time. One thing we can do is change all of these macros we see here, like get and resources, into a bunch of functions. At compile time, and you don't have to know this, this takes the code you see here and generates a bunch of functions, all named match.
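Hand-written, the generated code amounts to something like the following toy router. This is an illustration of the pattern, not what Phoenix actually emits; the module, the route set, and MessageController are invented here:

```elixir
defmodule FakeRouter do
  # One match clause per route; the HTTP method and path segments
  # are distinguished purely by pattern matching on the arguments.
  def match(:get, ["messages"]), do: {MessageController, :index}
  def match(:get, ["messages", id]), do: {MessageController, :show, id}
  def match(:post, ["messages"]), do: {MessageController, :create}

  # Catch-all clause for anything that didn't match a route.
  def match(_method, _path), do: :not_found
end
```

Dispatching a request is then just FakeRouter.match(:get, ["messages", "123"]), and clause selection is handled entirely by the VM.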
And the virtual machine lets us define functions with the same name but pattern match on different arguments, and the virtual machine then takes this and essentially builds a trie-like structure out of it, so it's able to dispatch extremely quickly. As far as Phoenix is concerned, at runtime, when a request comes in and we want to match it in the router, we just call Chat.Router.match with the parameters, and the virtual machine handles dispatching that function to the proper controller. So it's very fast, and we didn't have to generate any complex data structure ourselves. I think Aaron Patterson has done some work over the last couple of years in Rails to do this, to build up a data structure that allows you to quickly dispatch, but it took a lot of work to do it in Ruby, and you have to build it at runtime. In Phoenix, we do it at compile time, we basically hand it off, and the VM does all the work for us. And we stole the idea of route helpers from Rails. Your routes are going to generate route helpers very similar to the ones you have in your Rails applications, and this is just a super nice thing to have, so you don't have hard-coded paths sprinkled throughout your code base that are going to change. So if I have resources with messages, I'm going to be able to ask for the message path of some ID, and it's going to generate the path back for me. And we do all this work at compile time, so at runtime there's no work. And now I'm going to do something risky here and talk about channels, but with a live demo, and we'll see what happens. Channels are why I'm here talking about Phoenix. They're why I took the ideas from Sync and started looking around, and they were the first feature that I added to Phoenix. Even before we had a view layer, I did channels for trivial real-time communication.
And I like to say that we go beyond HTML. We want to really push towards connected devices: whether you're in your browser or on your iPhone, we should be able to have these devices connect to a backend and talk to each other. So one thing we built recently that was a lot of fun was a collaborative editor, kind of like a Google Docs competitor. For those of you who want to try it, go to bit.ly slash phoenix channels. I have a browser window running here, and we'll see if we can access this. My resolution is really tiny, so I'm going to blow it up temporarily. So I have two browsers running side by side, but really any number of connected clients can connect and type a message, and we're going to see it happen in real time. Okay, the Metasploit guys, I knew they'd be here, so that's kind of concerning, but it should be fine. Every single key press that you type is sending an event over the channel, and it's applying it locally to everyone's editor, and this is going to turn into mayhem. The number of messages happening right now is probably in the hundreds, right? Every time you moved your cursor, it sent an event to the server, which broadcast it out to everyone listening. And if I can actually type some kind of expression here without someone deleting it... it's not going to happen. We've also integrated Wolfram Alpha, so if everyone would stop typing... no? Okay, here we go. I do want to show you: we're doing some computation on the backend, so since we integrated this with Wolfram Alpha, let's say I want something to be computed for me. I'm going to ask... oh my gosh. Give me a picture of Nick Cage. That was very popular the other day. If I hit command-enter on that, it's going to call out to Wolfram Alpha, compute the result, and then return it to the browser. Yeah.
So this took about 150 lines of Elixir and JavaScript to make happen in Phoenix. It really was a shockingly small amount of code to make this work. And this is going to scale to thousands of users, because each of these connections listening for events runs in its own process, and since we can run hundreds of thousands of processes, each user is listening for PubSub events and relaying every event they type to all other subscribers. Okay, so I'm half tempted to not leave the server up and running, but I'll leave it up. Okay, uh-oh, there it is, there it is. So if you went to my training: the WYSIWYG gives you this obscene HTML, and just like Rails has the raw helper or html_safe, we're just passing it through. I know you should properly sanitize this, but we're not going to do it. So we are secure by default, but for this demo I'm just passing the raw HTML through. All right, I'm going to leave that up and running, and if my laptop dies, then we know what happened. Okay, so Phoenix channels. Let me put my resolution back down, and let's talk about how this works. If you want to build a chat application, this is basically an entire basic chat application over the next couple of slides. We can route different topic patterns, like a room 123 or room 456, some database ID. We can say: I want a channel at this pattern, and we want to map that to a RoomChannel. So we're routing PubSub traffic based on a topic to a room channel. Channels are very much like bi-directional controllers between client and server. Then we define a RoomChannel module where we can say: okay, I want to join this channel, we do some authorization, and then I handle incoming messages. For a chat example, I would just want to broadcast a message and relay it to all listeners. And then I can reply to the client that pushed the message and say: okay, I'm going to give you an acknowledgement, this worked out.
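The channel he describes can be sketched like this in Elixir. This is a hedged sketch of the Phoenix channel API of that era; the `RoomChannel` module name, the `"rooms:*"` topic pattern, and the `"new_msg"` event name are illustrative, not taken from the talk's slides.

```elixir
defmodule Chat.RoomChannel do
  use Phoenix.Channel

  # Clients join topics like "rooms:123"; do authorization here.
  def join("rooms:" <> _room_id, _params, socket) do
    {:ok, socket}
  end

  # Relay an incoming message to every subscriber on the topic,
  # then acknowledge the sender.
  def handle_in("new_msg", %{"body" => body}, socket) do
    broadcast!(socket, "new_msg", %{body: body})
    {:reply, :ok, socket}
  end
end
```

In the router-style declaration, a pattern such as `channel "rooms:*", Chat.RoomChannel` maps all matching topics to this one module, which is the topic-based PubSub routing he mentions.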
And then on the JavaScript side, we include the Phoenix.js channels client, where we say: give me a connection to the server, and we're going to multiplex all channels over this one connection. So you could be subscribed to 100 chat rooms and it's only one connection to the server. By default we use WebSockets, but we're going to fall back to long polling for older browsers. And we have this really nice API. If you have some jQuery inputs, we can say: when enter is pressed, we're going to push a message to the server, and when we receive a new message that was broadcast from someone else, we're just going to append it to the DOM somewhere, right? Then we join a channel, and we can even chain receives. We can say: okay, I'm going to join a channel, and when I receive "ok", console.log "connected"; otherwise tell the user that something bad happened. So this is a very, very basic example, and it would give you an actual chat application. And the goal with this is not just to have nice JavaScript APIs. We really want to go beyond HTML, because this is where the world's moving. I'm a web developer and I think the web browser ecosystem isn't going anywhere, but we have smart toasters and smart ovens, right? And those aren't going to be running browsers. At least I hope not. Someone will probably put a Node.js runtime or something on them. We do have multiple platform channel clients. What you see here in the foreground is the iOS simulator running a native Swift channels client, and it's talking to a browser in the background running that chat app I just showed you with the JavaScript client. They're speaking over the same backend channel module, but it's native code on iOS talking to JavaScript code in the browser. And as far as Phoenix is concerned, it's connected to both of these things and it doesn't care.
In your channel code, you're just talking down to an abstracted socket, and the client could be coming in over a browser, iOS, and hopefully Android soon. I wanted to have Android support for 1.0, but we're going to launch with just a browser client and an iOS client; Android is in the works. And Phoenix 1.0 will be out in a couple weeks, so it's very, very close. And the cool thing is, if we look at how channels are structured from the outside, we have multiple Phoenix servers that are clustered together. The Erlang virtual machine gives us a distributed environment out of the box. So we have Phoenix servers talking to each other, connected to these clients. One server might have a browser connected over the long-poll transport, and it might be pushing PubSub events being broadcast out, received in a browser over a WebSocket, on an iPhone over a WebSocket, and then even on an embedded device. The internet of things is popular, it's becoming a buzzword, but it's, you know, the next big thing; it's where the future is going. So we have protocols like CoAP, where you could write a Phoenix channel client over the CoAP transport, and it doesn't matter to your backend code whether your clients connected over CoAP or WebSocket. You just write code against this shared socket abstraction, and the clients receive the messages. But the reason I'm so excited that this is going to work incredibly well for us is that if we rename these labels and look at what Erlang was designed for in the 80s: Ericsson wanted a language to run on a bunch of telecom switches, right, and they wanted to connect a bunch of phones through switches across the world. It's the exact same concurrency model that Erlang was designed for. They had this particular problem: run programs on switches, connect those switches together, and connect phones. It's the exact same problem.
So this is how we're going to be able to scale Phoenix incredibly well, because we're operating in the same paradigm; the modern web is the same paradigm we had in the 80s. Our devices have just gotten a lot smarter and use a lot more traffic. But I'm really optimistic about how many processes we can get in a Phoenix app. I know I've said we can run hundreds of thousands of Erlang processes, and I imagine we can easily see hundreds of thousands of channel processes connected on a single server. I just haven't had the hardware to properly test it. In fact, in those Rackspace benchmarks I showed you earlier, the Phoenix app was only using about 60 or 70% CPU. We maxed out the network link before maxing out the CPU; there just wasn't enough IO to push more traffic through the network. And if you look at how channels work on the inside, we have a single connection coming in from the client. When it hits a transport layer, whether you're coming in over WebSockets or long polling or CoAP, we're going to dispatch that to a channel based on those router definitions. And each channel is going to be isolated in its own process. So if you're doing work in a channel that's really computationally expensive, you're not going to block the tubes. Even if one client is connected to 10 channels, let's say, and one of their channels is doing some Wolfram Alpha computation, it's okay to be doing that. We're only going to block that one channel; the rest keep running on their own. And if they crash, only one channel crashes, not the entire app. We took some inspiration from Socket.IO from Node.js. But there's one thing you have to be careful of in a Node.js app: let's say we had a bunch of WebSocket connections with Socket.IO.
If we let one of those clients throw an error and we didn't catch it, that would actually bring down the entire event loop and the entire program for all connected clients. So one error in your app could bring down all clients. In Phoenix, because of our concurrency model, everything is isolated, so our error recovery is local to that process. And by default we use PubSub in the standard library to talk in a distributed way, but we have a fallback to Redis for Heroku, because on Heroku dynos cannot talk to each other over the network. So we fall back to Redis, and Phoenix will just work on Heroku with a one-line change, as long as you're willing to use Redis. Otherwise, when you're deploying, you just run Phoenix. There's no separate dependency to make all this work. You run a single Phoenix server and you get all of your PubSub broadcasts just by virtue of the standard library. And we have synchronous messaging. One thing you're going to need when you're pushing or broadcasting a message from the client is that they want to know what happened. They want a specific request/response-style message over WebSockets, right? And this is what we've provided. So let's say, for my application, I want to push a new message to the server. I can then receive the result of what happened to that particular event I pushed. This is one thing we didn't have initially, but then folks were trying to push a message to the server and broadcast back, and you would lose ordering. So this is one lesson we learned along the way: we needed a way to do request/response-style messaging over channels. And we can even say, if you're familiar with Elixir, after five seconds, show some loading spinner. This is very similar to how you would send and receive messages in your normal Elixir code.
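The "after five seconds" idea mirrors plain Elixir message passing, where a `receive` block can time out. Here is a minimal, self-contained illustration of that pattern in ordinary Elixir (not the channels client itself):

```elixir
# Ask another process for a result, but give up after 5 seconds.
parent = self()

spawn(fn ->
  # Simulate slow work, then reply to the caller.
  Process.sleep(100)
  send(parent, {:result, :ok})
end)

receive do
  {:result, value} -> IO.puts("got #{inspect(value)}")
after
  5_000 -> IO.puts("show a loading spinner")
end
```

The channels JavaScript API exposes the same shape: push an event, chain a receive for the reply, and attach a timeout handler, so client code reads much like this server-side Elixir.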
This maps really well when we look at the client and server. Let's say we have some block of code in our channel that says: if we inserted a message properly, we reply to the client, okay, here's your message. And then on the client, we can almost pattern match. It's fake pattern matching, but we can say: when we receive "ok", when the status was ok, we're going to get that message and add it to the DOM. Otherwise, if there was an error, we can reply with an error on the server: here's the list of error messages. And then the client can say: when I receive an error, grab those errors and show them to the user. So it's a very seamless integration between client and server using the channels client. And the last thing I want to talk about is the way we structure applications, because this has been a problem for me building out Rails applications historically. One common thing you'll hear is that Rails is not your app, but Rails really runs the entire show. In Phoenix, everything is an application in the Elixir sense. Everything you write in Elixir is going to run as this concept of an application, and there's no way around it. Everything has to run as its own application, and we run things in a supervision tree. We have supervisors that watch over the different parts of our application, and if one of those parts crashes, that supervisor is responsible for having the system recover. So as an example, let's say we're building a search engine. This isn't a website; let's just build a search engine. We're probably going to need some kind of crawler, so our supervisor is going to watch over this crawler, and that crawler is going to be a supervisor that has a bunch of crawler processes crawling a bunch of websites at once. If one of those websites has malformed HTML or some issue, it's going to crash, but then the crawler supervisor can say: okay, why don't you retry that?
The crawler supervisor is going to be responsible for watching over those operations. And then we'll probably want some indexer watching over the crawler operations, so when the crawling happens, our indexer can index the results, right? So we build this, and we get massive VC funding, and we say: okay, now we want a web front end. We've got this really amazing backend that can crawl websites; let's add Phoenix so we can have a search form and deliver results to a browser. So you might say: okay, perfect, microservices. Microservices are the latest buzzword. Depending on when you last visited Hacker News, they're either the next big thing, or someone tried them and it worked out really poorly. But for us, we're removing this word from our vocabulary. We don't even want to talk about microservices, because the way we build out applications in Elixir, they already are microservices, and it's not a big deal. And we don't have the caveats of microservices, where we say: okay, let's go with microservices, and then, well, they need to talk to each other, so let's build a JSON API on our search engine so Phoenix can talk to that JSON API, or let's use a message queue like Redis or RabbitMQ. We don't have to do any of that, because our concurrency model is distributed and we can just talk to processes, so applications can easily talk to each other. So what we do instead is say: okay, this is what our application looks like. We had that search engine; now we add a Phoenix app. The Phoenix app is just another application with its own supervision tree, where we have something accepting TCP connections for web requests and a PubSub process. Phoenix is going to monitor those things for you. If your PubSub process goes down, like if you're using Redis on Heroku and maybe your Redis service is offline, it's going to keep trying to reconnect for you.
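The search-engine example could be sketched as a plain Elixir supervision tree. This uses the Supervisor API of that era (`worker/2`, `supervise/2` from `Supervisor.Spec`); the `SearchEngine.*` module names are hypothetical, and the child modules are assumed to exist elsewhere.

```elixir
defmodule SearchEngine.Supervisor do
  use Supervisor

  def start_link do
    Supervisor.start_link(__MODULE__, :ok)
  end

  def init(:ok) do
    children = [
      # The indexer consumes results from the crawlers.
      worker(SearchEngine.Indexer, []),
      # A sub-supervisor watching a pool of crawler processes;
      # one crawler crashing on malformed HTML only restarts that crawler.
      supervisor(SearchEngine.CrawlerSupervisor, [])
    ]

    # :one_for_one restarts only the child that died, leaving the rest alone.
    supervise(children, strategy: :one_for_one)
  end
end
```

Adding a Phoenix front end is then just one more application alongside this one; the two talk through ordinary process messaging rather than a JSON API or a message queue.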
You'll see error messages in your application, but something's watching over that. Like that Event Machine thread issue I had in Sync: in Elixir, in Phoenix, you would have an application supervisor basically saying, something happened here, I'm going to try to restart it. If I'm unable to rectify the situation within a certain amount of time, the supervisor itself will actually crash, and then its own supervisor will try to restart the entire chain. So you have these systems built up watching over each other, and this is what gives you the nice, robust, fault-tolerant applications you hear about. And we have really nice monitoring. One thing you're going to want in a robust system that can't go offline is a real-time look into what's going on. So for all these processes we've spun up, we're able to get, in real time while the application is running, a list of processes, and we're able to see load charts. In this example, I don't know if you can read it, we have a database repository that has a pool of database connections. The big tree you see here is the pool of 10 connections, and the database pool is a supervisor watching over each of those connections. So if one of those happens to go down, or the database itself goes down for some reason, maybe some network issue, that supervisor keeps trying to restart the connections for you automatically. So we have really nice monitoring into what's going on. How am I doing on time here? Gotta get going. Okay. So we focus on long-term productivity; monitoring is what gives us that. And we can reason really well about when things go wrong because of our error reporting. It's good timing anyway, because that was my last slide. So if you want to check Phoenix out, we have great docs at phoenixframework.org. We have dozens of guides already, so please check that out. It's very easy to get up and running.
And if you're new to the Elixir ecosystem, it's also worth checking out elixir-lang.org; they have great getting-started guides. So if you have any questions, find me after the talk, and thanks a lot.