So, let's get this party started. My name is René, which is also my handle on GitHub and Twitter, or like Justin Schneck would say, the Twitter. You might know my project Credo, which is a code analysis tool for Elixir that tries to give guidance and human-friendly results to the user. But this talk is not about Credo. It is about how José inspired a site where people can post their Elixir creations. It is about ElixirStatus. To tell you the story, I have to take you way back to when I created Inch, a tool which analyzes inline code documentation, although that's not really important. What's important is that this was my first Ruby project with a real purpose, and I really wanted to promote it. I wanted people to see and use it. And I discovered the right place, RubyFlow, which is a Ruby community site where people can post stuff they made. This is a cool thing in and of itself, but I noticed that a certain José Valim was posting to the site about Inherited Resources, a project of his. And I thought, holy cow, do you realize what that means in terms of visibility? Somebody as accomplished as José will post an update saying, look, here's an update to something I made. And right below that, in the same space, a complete newcomer like myself could post, aha, and here is something that I did. This really impressed me. So a year later, when I had created my first Hex package, some volunteers and I created this site. It's called elixirstatus.com. It lets you post your project updates, announcements, blog posts, books you've written, and meetups you've started. You can do so by signing in via GitHub and posting simple announcements in Markdown. And the twist is: only the creators of stuff are supposed to post their own stuff. If we take a closer look at the site, we can see why. There are all kinds of postings, an announcement for a Kafka-related project, and below that a blog post about Phoenix integration testing.
The reason why we only want the actual creators to post their stuff, instead of letting everyone post what they found on the interwebs, is simple. When the person posting a project or blog post is the actual creator, it makes sense to link his or her GitHub profile. It also makes sense to optionally let them link their Twitter profile, because in the past this has led to interesting discussions on Twitter about the posted thing. And finally, it makes it that easy to retweet stuff you like. Now this has become a great tool, and real people are using it. And you can subscribe to all these updates via the social channel you prefer, so you don't necessarily have to visit the website every couple of days. This is because I thought of ElixirStatus as a social tool. It was meant to be infrastructure for all of us. Chris mentioned what a young community we are. And because we are such a young community, it is important to give our open source projects all the visibility we can. And another important thing here is lowering entry barriers and welcoming newcomers to this community. The latest of these channels you can subscribe to is ElixirWeekly, which is a weekly email newsletter about all things Elixir. It is in a way my personal take on the community, including all the stuff that's posted to ElixirStatus. I'd like to close with a very special thanks to Johnny Nguyen. He created the #MyElixirStatus hashtag on Twitter, which many of you will know, and which obviously inspired the name for ElixirStatus.com. He has always been someone who inspired me.

Hey, everybody. My name is Pete Gamache. I work for a company called Appcues in Boston. I write Elixir a lot. We run it in production. And I'm going to tell you about a library, a very small library that I wrote to make my life simpler. It is called GenRetry. As you can probably tell from the title, it's a generic retry.
It provides retry with exponential backoff, configurable: configure your delay, configure your exponential base, a whole bunch of stuff. All you need is a function that will raise an exception when it fails and won't when it works. It's real easy. I encourage you to check out the code on GitHub. It's a real quick read. So I'm talking about me. Now, you? Everything works. Everything works the first time, every time. I can really only imagine what that's like. And so I need tools to help me along. On a more serious note, your code is not that great either. And a lot of it, I'm going to put the blame squarely on other people. I mean, like, you connect to a network service. Maybe it's there. Maybe it's not. Is it written in Elixir? I'm not sure. So you're going to have to retry things. The let-it-crash mantra is great, but sometimes you don't necessarily want to set up a supervision tree for a really small thing. You don't want various parts of your system coming down earlier than you want them to. So if at first you don't succeed, try, try again. Quick note on exponential backoff. I'm sure a lot of you are familiar with it. What it means is that in between retries... so you try something, and it screws up. You do not immediately retry. You wait a little bit of time, let's say one second. It screws up again. You wait two seconds, and then four, and then eight. Now that's an exponential base of two, but essentially the idea is that every time you need to retry, you back off a little bit more. I'll talk just a little bit about jitter. Jitter is a way of adding a degree of randomness to when something is retried. So for instance, a jitter of 0.5, or 50%, on a delay of one second would mean you would wait between one and one and a half seconds, randomly selected. And this is useful if, for instance, you have 500 servers all hitting the same external service. The external service is down for a moment. And then everything waits 10 seconds and just hammers it again.
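The backoff and jitter math described here can be sketched as two plain functions. This is only an illustration of the idea, not GenRetry's actual API; the module and function names are made up.

```elixir
defmodule Backoff do
  # Delay before the nth retry (n starts at 0): base * 2^n milliseconds.
  # With base 1000 this gives 1000, 2000, 4000, 8000, ...
  def delay(base_ms, n), do: base_ms * Integer.pow(2, n)

  # Add jitter: with jitter 0.5 and a 1000 ms delay, this returns a
  # random value between 1000 and 1500 ms.
  def with_jitter(delay_ms, jitter) do
    round(delay_ms * (1.0 + :rand.uniform() * jitter))
  end
end
```

Because each of the 500 servers draws its own random jitter, their retries spread out over the window instead of landing on the recovering service in lockstep.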
You don't want that. It doesn't work as well. So, the basics. There are two main functions in GenRetry. The first is just called GenRetry.retry. It handles background processes where you don't really care about the output value and doing something with it. So it's considered a replacement for spawn_link. And then we have GenRetry's Task.async. And that is a drop-in replacement for Task.async, except it provides retries. And if at the end you go through the specified number of retries, it blows up in exactly the same way that Task.async would. We have this background process. We don't care about the output value. We just want to know that the TPS reports have been delivered. A lot of people care about that. But what happens if this screws up? I'm feeling sad and scared already. But with just one extra line of code, we are able to say, hey, just do this a bunch of times. Maybe it'll work. Maybe it won't. But if it works on one of those five tries, then that's one fewer crash that your service has to take care of. A spoiler: this is what happens in the next frame. Okay. So Task.async, this is when you care about the return value of something. Other languages would call this pattern futures or promises. The idea is that you can start a bunch of asynchronous tasks to go do stuff and then sit there and collect all the results at the end. But again, what happens if it doesn't work? Yeah. Here is a simple example of how to use GenRetry's Task.async, and it's pretty much the same thing. Drop-in replacement: specify how many retries, how long you want your initial delay to be, and, Bob's your uncle, it pretty much works. In conclusion, real short talk. There are no small speaking engagements, just small speakers. Let it crash, but on your terms: you get to decide what part of your program falls down, and how big that part of the program is. And GenRetry is a tool for helping you out with that. We have a few other open source tools. ExConstructor is good for structs.
Stifle will trap return values, or sorry, non-return values, side effects, and then release them. And other than that, see you in the pool.

Right, so I had to do something in Elixir, just a very simple task. Can you hear me now? Right, so my name is Faheem, I work for Deliveroo. I started learning Elixir about a year ago. Just small tasks, I did some small things. So one of the things that I did was: I had a remote API that I had to query with a number of requests, and it was about half a million data points that I needed to get. And the remote API was not very performant, so I had to scale it down. The number of concurrent requests that I could ask it to handle was limited. But I didn't know what that limit was, so I needed to have concurrent processes, but how many, I would have to figure out. So first, I needed background processes. The second thing was that I wouldn't want to overwhelm the remote API. So for that, I wrote something that, coming from Rails, is like a Resque worker: some jobs that need to run in parallel. And for that, I used poolboy. So I'll show you how I used poolboy. With poolboy, you have a number of workers in the pool that it manages. You can ask it for a single process, a worker, from the pool. You ask that worker to do some job. Once that job is done, you check that worker back in. So what I was doing was, I had a number of rows, the data that I needed to post to this API. That API does some calculation and gives me back some results. So I would get a row, I would send it to the service, the service would calculate something, I would get the result, I would put it in the database, and that was the task. So I ran poolboy with 10 workers at the start. Those 10 workers, I would give them jobs immediately. So 10 jobs at a time. At the 11th job, my process would stop.
My process would wait, because it's trying, in a transaction, to get that worker. So to get that worker, it would wait until one of those 10 got free, checked back in, and it could get hold of that worker. So this is the code I wrote. It's simple. The point of this talk is just to show how simple it is to do background processes. So if you look at this function, send_to_pool, that's all we need to do to actually run background processes. You run a transaction. You tell it what your GenServer's name is, the pool GenServer, and I will show how that is set up. The second argument is an anonymous function that takes in a pid, where you say what to do. And that's it. That's all you need to do to actually make it work. As for actually setting up the pool, I'll show how simple that is. You just have to have a pool name, what worker module you want to use for it, how many workers you want, and how much overflow, like how many extra workers you want if there is a rush, a peak, and you want to absorb that. That's all that's needed. Thank you.

All right. Hi, my name's Rockwell Schrock, pretty much a newbie to Phoenix and Elixir. I don't know about you guys, but this is me these past couple of weeks, learning all this cool stuff. This is an authorization... I almost said gem... authorization package that I extracted from an application that I'm working on. It's called Authy. It's very similar to Pundit in the Ruby world, if you're familiar with that. The way it works is you define a policy module that follows a certain convention. If you have a Post, or in Phoenix it might be MyApp.Post, then you have a Post.Policy module that it expects to find for authorizing actions. A cool thing that Elixir has over Ruby is that normally, in Pundit, the action that you're authorizing is the name of the method itself.
Here, you can just use pattern matching to do the authorization, so much so that you don't even really need anything in the body of your function in a lot of cases. So here, for example: an admin user can do anything. A regular user can only modify a post if they own it, if the post's user ID is equal to the user's ID. And then you can also do a catch-all, to deny all actions for everyone else who isn't authorized. This has a cool side effect in that if you forget to cover a certain case in your policy module, it just throws an error, and you know that you've missed a certain case. Here's a quick example of how it looks. Just for example: show a bunch of users, an admin and a post, and it shows you the different ways in which you can call it using this authorized function. All it's doing is really simple: it just follows the convention, so it knows that admin is a struct of type User, it looks for User.Policy, and it calls the can function on that module. You can also handle the case of a nil user, for example a guest user who is not logged in. You can handle that in your policies as well, through pattern matching. And if you don't have a particular instance of a resource, you can just pass the module name itself, for example for an index action. Another idea I stole from Pundit is policy scopes. So certain users might only have access to index or list certain resources. Here I'm using Ecto, but it doesn't have to be; there's no dependency on Ecto. You can just use a scope to figure out, again, which user can access which things for a certain operation. Again, here the admin can see all posts. A user can see only posts that they own. And maybe a guest user can only see posts that are published. So all that stuff is not at all specific to Phoenix. I've also included an Authy controller module that you can include in all of your controllers, which has a couple of macros to help reduce some typing.
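The pattern-matched policy described here can be sketched roughly like this. The struct fields and the can? function name are assumptions based on the talk's description, not necessarily Authy's actual API:

```elixir
defmodule User do
  defstruct [:id, :role]
end

defmodule Post do
  defstruct [:user_id, :published]
end

defmodule Post.Policy do
  # Admins can do anything.
  def can?(%User{role: :admin}, _action, _post), do: true

  # A regular user can modify a post only if they own it; matching the
  # same `id` in both structs does the ownership check, so the function
  # body is just `true`.
  def can?(%User{id: id}, action, %Post{user_id: id})
      when action in [:edit, :update, :delete],
      do: true

  # Catch-all: deny everything else. Without a catch-all, a missed case
  # raises a FunctionClauseError, which is the "cool side effect": you
  # find out immediately that a case isn't covered.
  def can?(_user, _action, _post), do: false
end
```

A nil user (the guest case) would just be one more function head matching on nil instead of a %User{} struct.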
So in your index action... well, here we've already imported the Authy controller module in the web.ex file, so it's in all your controllers already. There's an authorized macro here which you can pass the resource that you're authorizing. And then if it succeeds, you do the block. And if it fails, you can define a number of actions through a callback module that you define in your config. Here we're authorizing the post for an index and then scoping it depending on the user. You can also use scoping when you're retrieving a particular thing. The scope is designed to start building your queryable, so that when you pass it to the repo for retrieval, you know that it's within the right context. Like I mentioned, you can customize how failures are handled. For example, you might want to handle when a resource is not found versus when it's unauthorized, and, depending on how sensitive your resource is, you might not want to reveal that something's not found; you might just want to say it's unauthorized, depending on the user's permissions. And all those defaults that that macro assumes about your controller, you can override for any sort of weird edge cases. You can change which policy module it looks for by default, you can change the action, and so on and so forth. There are a couple of recommendations in here for how to structure things. One of the cool things, I thought, is maybe you have a controller that's not really a RESTful controller: there's no real resource behind it. You can just define a policy for that controller, and you pass it in, and that way you can authorize actions on that controller. Looks like my time's almost up, so: three, two, one. Thanks, guys.

Okay, so this is a story of what's happened in production over the last month in our Elixir app. We have a mix of Ruby and Elixir apps, and they're very similarly structured. They have to talk to each other over a RabbitMQ queue.
We use the RPC pattern, so we have clients and servers on both sides. But we were getting memory leaks. The Elixir app, after a little while, depending on how much traffic we got, would just run out of memory and crash. We had memory monitors from our hosting provider, and it would be a nice little ramp: every request, memory got bigger and bigger. And that's because process leaks can equal memory leaks. So in our Phoenix controllers we would start a GenServer with start_link; that Phoenix controller process would exit, and the GenServer would keep running. It took us a while to figure that out, though, because our hosting provider didn't allow us to run Observer, and I kind of hacked around that, and then I figured it out. And the reason for this is that a Phoenix controller exits with a normal exit, and normal exits aren't caught by a GenServer when you do start_link. What you have to do to catch that is to trap exits, which is Process.flag(:trap_exit, true). If I'd actually read the docs really carefully and understood the exact wording, it actually says as much in the docs for GenServer.start_link. Another problem we were having is that our RabbitMQ kept crashing with too many queues, which I didn't even know was possible. Like, there can be too many queue names and it'll go down. So we thought we were being safe, because we followed the docs and we used auto-delete and exclusive. We use AMQP's queue declare in Elixir, and we use the Bunny gem in Ruby. They both say they're exclusive, they both say auto-delete, but we get different behavior, because in Ruby the client creates the connection, the client creates a channel on the connection, and when the client is destroyed, just through normal Ruby garbage collection, the connection gets destroyed.
Meanwhile, in Elixir, because I wanted to be responsible, I actually read the RabbitMQ docs, and they said you should pool the connection. So I put the connection in a GenServer, and then anyone can just ask for that connection, so I don't create a new TCP connection to the Rabbit server all the time. The client creates a channel on that connection, and then the client is destroyed. The bug is already fixed, so the client really does clean up. But exclusive doesn't delete the queue until your connection is lost. We only lose the connection in Elixir if Rabbit actually restarts. So exclusive won't do cleanup for you if you're keeping your connections open to RabbitMQ. And auto-delete doesn't delete your queue until your channel closes, and it turns out that even with trap exits and a terminate callback, unless we explicitly close the channel, auto-delete won't clean up the queue either. So even though both those options are documented as, oh, this is how you make sure all your stuff is cleaned up in Rabbit, you have to very explicitly clean up after yourself. And there's also a gotcha on auto-delete: the automatic delete won't happen if no one ever uses the queue. So if you time out and you don't get a response and no one's used the queue, that can also lead to it not being cleaned up. Yeah, that's it.

So, has anybody ever heard of Ratchet? Mom, mom, put your hand down back there. No, no, no, no, you don't count. Yeah, so Ratchet's this idea that I'm playing with. It's sort of this thing. So I feel like there's room between the traditional server-rendered apps and these really complicated thick-client applications. And don't get me wrong, all these front-end frameworks are really, really awesome. They do a really great job at what they're great at, and that is building applications, right? This is things like the Atom text editor and the Cloud9 in-browser text editor, things like that. But that's a lot of complexity.
When we do that, we have to maintain two separate applications. We have our back-end application and our front-end application, and they're talking to each other, and that's really hard. That's a lot of complexity for simple web pages. Really, web pages are often just documents. These are things like Amazon and the marketing website for your company. That's just simple documents. So to introduce the complexity of a server-side or client-side JavaScript application may be a little much. I saw a talk recently by Sam Stephenson at RailsConf a few months back. He was talking about Turbolinks, and he presented this information I thought was pretty good about how, in the past 12 years, the complexity of applications has really, really grown. It was really simple when Rails first started, and now we've got all this JavaScript happening both on the server and the client, and it adds a lot of complexity. I would encourage you to go watch this talk, because it's super, super interesting. But you might be thinking: if we go back to the simple thing, we're gonna be missing something really great about new applications, and that is, we wanna be able to see live data updates as the state changes on the server. If we go back to this model of model-view-controller and the request-response cycle, there's really no way for the client to be aware of state changes on the server. So that's where this tool Ratchet comes in. It's kind of this little thing that you bolt on to that traditional view, and it's a data stream where the client communicates with the server, and hopefully it's really simple to use. I do want to warn you: I don't really know what it is yet. I've been playing with some tools and I've got some things built, but I would say they're not even close to ready. But as a high-level kind of overview, the idea is that you'll describe the page content as a data structure and then use that data structure to build the view.
So it comes in two parts. One part's on the server. This is a templating language. So you might have templates that look like this in your application. This is EEx; you've embedded some Elixir code in your view. I'm thinking maybe we could do something like this instead: we have annotations that we add to our view, and our views are plain HTML. One benefit of this is it's really, really friendly to our designers. They can deliver prototypes to us in plain HTML, and those prototypes become the literal templates that we use to power the application. Then we prepare a big data structure like this and present it to the view, and the view reacts by adding that data to itself. So the second part is a client, and this is hopefully a zero-configuration thing. It might look like this. The point is, I don't really wanna write JavaScript. I wanna just require a library and have it just work. So let me show you a quick demo. I recorded it so I wouldn't mess it up. All right, so the first thing that I do is add... oh, oh, you can't see it. Full screen, go! Yeah, so the first thing that I do is add this socket. You can see it at the top; I'm gonna hit play so you can see it keep going. The next thing that I do is add a JavaScript dependency. That's the library code, the client code. And then I reference it within my application. Is this going at 2x? There we go. All right, so then I add to my JavaScript app document that single line of code to require the library. And then within this model, for lack of a better term right now, we have this module that creates a thing called an action. And an action is like a way of mutating state on the server that is aware of the client. It actually broadcasts the changes that are made to the client. So it knows that when you create a message, I should tell everyone what the messages look like now. And then in our controller, wherever you actually do the creation of that model, instead of using the add function directly, we use the create action.
And then finally, the last thing that happens is the view is updated with an annotation that turns it on. So let me show you this thing working real quick, and we'll be done. Hopefully I'll have it in just a second. I wasn't planning on this thing. Okay, here we go. So I say hi here, and you see that happen. I open up a new window, kind of move it to the side, and say hello, and you see they both update themselves live from both sides of the thing. Cool, yay, thank you.

All right, so, super speed. First things first: I'm doing a workshop on Elm in the Osprey ballroom right after the lightning talks. So if you want to learn some Elm, get over there; we're gonna build cool things together. So let me go through a talk as fast as I possibly can, in super condensed form. I want you to build software better. This is what you spend your life doing, and we all die eventually. So that sucks. Software is the biggest lever the world's ever seen in terms of making change in the world. So the best thing to do would be to make the lever longer. That's what the computer scientists do. But it's kind of irrelevant, because basically everybody in industry is sitting as close to the fulcrum as they possibly can, pushing as hard as they can, and when they want to make more change happen, they just bring on more people and push harder. So, yeah, anyway, the best thing is to move further out on the lever. So we can help people move further out, and functional programming is one of those things. We've known it was better for 50 years, and people still suck and don't use it. Anyway, I think it's important because of some math that I did for myself a few weeks back. If you want to be a successful company, typically you have a business model, and it depends on having a lot of customers, unless you're really niche. So let's assume that you're working on really successful software and do some math. You have 10 million users. Kick ass, awesome. Sorry, sorry.
To get the value out of your software, they need to visit an average of, like, 10 pages a day, and you built the software really badly, because you're not good. So each page load takes on average six seconds. But that's okay; like, nobody does this. We don't have to deal with this all the freaking time, everywhere we are, like when you're on the biggest hosting platform for developers ever. That's not a thing that's wasting our time. All right, so who cares, though? 10 million minutes a day is actually what that is. And that's not an easy number to wrap our heads around. It's like, millions, what's that? So that's 166,000 hours a day, or 6,900 days a day, or 19 years a day. These are all weird metrics. It's a lot of time to stare at a loading screen, but who cares? On a related note, people live about 80 years. Congratulations, your software kills a human every four days. And it's software, so it never sleeps in its bloodlust. So I kind of want to propose that our metrics dashboards should track a mortality rate right alongside the other metrics, because I think it's kind of important. But, like, why do I bring all this up? I think you have an obligation to do your job well, because you're killing people. Yeah, so, enough about murder; let's talk about computer science. This is John Backus. If you don't recognize the name, you'll recognize a couple of things he's known for. One of those is Fortran. In 1953, he worked for IBM. He convinced his superiors to let him build a language that made it easier to work with equations and, like, stop coding assembly directly. And so it's an imperative language, super-duper imperative. It's been developed ever since 1953, so it's maybe the longest-lived extant language people are using. It also supports parallelism and object-oriented programming these days, which is just fantastic to me. So here's an example of Fortran. Super-duper imperative. And when I say imperative, I always mean, like, recipe style: do A, do B, do C.
Cool, stuff happened that we wanted, but there's no way for the computer to know what we wanted, right? It's essentially just a thin shim on top of assembly, just like C is. He also invented what's known as Backus normal form, or Backus-Naur form. It's a means of defining the rules of a context-free grammar. So, like, every programming language that you know probably has a BNF definition, right? Here's JSON's. If you read RFCs, you're gonna see BNF all the time. My point in mentioning all this is that he's responsible for both one of the longest-lived programming languages and the means by which people define programming languages. So he's kind of an expert on this stuff. And I know appeals to authority suck, but that's what I'm doing. Anyway, BNF won him the Turing Award. And he actually presented this talk: Can Programming Be Liberated from the von Neumann Style? So, what's the von Neumann style? There's this guy Alan Turing, right? In 1936, he came up with the universal Turing machine. It's the foundational model that all of our computers work on. And it consists of an infinite tape, a read/write head, all this stuff you might've heard of before. The only thing that sucks about this is that you can't actually build one, because we don't have enough infinite tapes. So this guy, John von Neumann, came in, and in 1945 he described the machine that they built to do some math for some reason, and that machine was an actually constructible realization of the Turing machine. And it's basically exactly the same thing you have on your laptop right now. This is how it works: you have your CPU, some memory, a bus between them. Neat. Let's come back to Backus. So here's some code. This is Algol, who cares. It's basically an inner product. So that's all right. This is the imperative form of an inner product. He found some things he didn't like about this. There's invisible state. It's not hierarchical. It's dynamic and repetitive.
And it does some other things that he hates. Here's the thing it does: it goes over that bus constantly to get to memory. That's what happens when you assign something. And so this is actually really slow. Now, you might have heard of the von Neumann bottleneck, but he wasn't actually talking only about this. He was talking about how in your brain there's also a bottleneck, because you're having to do all this mental computation and bookkeeping: remember, oh, what's where. And so he said this sucks, and it's kind of ruining everything. He said this would be better. This is functional programming. We're just combining, or, you know, composing three functions together. And yeah, that's nice. So, nice things: there's no assignment. Everything's composable by default; they're functions. And you can do algebra on programs, and that's super good. So if you like functional programming, do more of that in Elixir. Also try Elm; come over there, I'll show it to you, I'll show you why it's neat. Thanks.

All right, my slide notes are not on there. So, all right, let me tell you a story. My name is Ian Warshak, and I am a developer. I've been doing Rails development for the past 10 years and Elixir development for the past year or so. And I'm gonna tell you a little bit of a story about me, and I say this not to scare you guys but to prepare you guys, because I'm like all of you. This is me four years ago. I got strep throat, which turned into pneumonia, which turned into septic shock, which turned into multi-organ failure. And I ended up in the ICU in a coma for nine days, and ended up in the hospital for weeks and months. And as part of that, I lost blood flow to my fingers and my feet. And if you can't see me: my hands, my fingers are missing. And if you haven't seen my legs, I have two prosthetic legs. This is my family. That's my music playing next door. This is awesome.
So when I woke up, one of the first things I thought was: how am I gonna teach my kids how to play baseball? How am I gonna teach them how to play basketball? How am I gonna drive? But also: how am I gonna work without fingers? How does a programmer work without fingers? And honestly, it was kind of a crisis in my head. I didn't know what I was gonna do. Lesson one: find your passion. This is easier said than done. But I would encourage you all to think about: what if you couldn't do what you're doing right now? What would you do? What do you have a passion for? I have a passion for programming. So even with black, red, and necrotic fingers, my friend helped me figure out how to hook up a stylus to my wrist guard. And within a few weeks, I was poking at that on my computer, and I actually started programming while I was in rehab, or continued to program while I was in rehab. Lesson two is having a support system: family, friends, people that are gonna lift you up. While I was in the hospital, I had a lot of support. That's my family again next door. All right, let's just keep going. Lesson three: get disability and life insurance. It is important and it is cheap. Get term life insurance, get disability insurance, all of you. I cannot encourage that enough. How do I work? People ask me this all the time: how do you program without fingers? And that's a very fair question. I wondered the same thing when I was in the hospital and they told me they were gonna have to amputate all of my fingers. Well, the answer is practice. I use a few adaptive tools, like Dragon, which doesn't really work very well for programming. So now I type kind of at the hunt-and-peck speed that you see your grandpa probably doing. But I found that programming doesn't take as much typing as I first thought it did. I type a little bit. I think a lot. I read a lot of documentation, and then I type a little bit more. So I feel as if I'm just as productive as I was before.
Since then I've been able to do some things that I never even did when I had feet, like running a half marathon — and of course, kissing my bicep while I'm doing it. Also, I climbed Kilimanjaro, which is the tallest mountain in Africa, at 19,000 feet. So again, that's something I never considered doing before. And lastly, I represented the US at the 2016 Rio Olympics in synchronized swimming. So guys and gals, you never know what you can do. So challenge yourself. That's all I have, thank you.

Okay, hi, I'm Casey. I work at a company called Netflix. For those of you who aren't familiar, Netflix is kind of like Pokemon Go, except instead of hunting Pokemons one at a time, we give you all of them at once. And instead of Pokemons, they're movies. And instead of leaving your house, you can just kind of sit on your couch. But the reason I make the comparison is because, like Pokemon Go, Netflix is really popular. It's so popular that we're over a third of the bits on the internet at peak, which by our calculations is a lot. Which means that we ran into some scale problems that a lot of other companies hadn't yet run into. One of those — with our control plane deployed on the cloud — is that we have so many servers that at any given point in time we always have servers disappearing. It's a feature of the cloud, if you will. And when those servers disappear, it's usually at 3 a.m., and if it happened to be an important server that disappeared, then we'd get paged and we'd be annoyed that it was so early in the morning. So we created something called Chaos Monkey, which turns servers off in production, but does it during business hours. And this was great because, without going out and giving our engineers an edict — like, you have to make your system fault tolerant — this created very strong alignment for them all to solve that problem.
We took the pain of being on the cloud and brought it forward, and the engineers solved the problem by themselves. And so for the past four-ish years that this has been running in production, we don't run into the problem of our services being disrupted by nodes disappearing or servers crashing. So this was really useful, and it's very fun to put on your professional biography that you break shit in production. But we thought, you know, there's gotta be more to this than that. How do we take this practice and apply it to other things? So we formalized it in what we call chaos engineering. You can go to principlesofchaos.org to read the full description. Basically, chaos engineering is a practice where you take a distributed system, like a microservice architecture, and — you're not trying to create chaos; you're assuming the system is already chaotic — chaos engineering is a process of surfacing the systemic behaviors so that you're aware of them, and if something's bad, you can fix it. So for example, a server disappears and the service goes down — you'd want to know that. And that can be very hard to model. A small change in your microservice over here could have a huge impact on the behavior of a microservice over there. You want to be able to surface that. Chaos engineering comes with some best practices, at scale, for how you set up experiments to run continuously to find those kinds of problems. We thought this was cool, and so did some other people. So I organized Chaos Community Day last year. We held it at Uber's office in San Francisco — Uber, Google, Microsoft, Yahoo, LinkedIn, Facebook, Dropbox, the usual suspects came — and we all discussed best practices. That was cool. We did it again this year: Chaos Community Day. We just had it last week in Seattle at Amazon's office. Again, the usual suspects showed up, as well as some startups.
And so we're starting to see that chaos engineering is an emerging discipline within software engineering, which is great. So if you have an interest in this, go to principlesofchaos.org, we have a Google group, or, you know, come heckle me. Thanks.

So my name is Przemek, and today I would like to show you a little project that I'm working on, and basically what it is about. It's a little browser-based game, a multiplayer game that heavily relies on real-time elements, and to make it I'm using Elixir together with Elm. And I'm really looking for some feedback, because it's a big learning experience for me, especially when it comes to Elm, since there are a lot of very new concepts for me in there. To give you some of my background — the background of the project — I actually spent most of my working years running a browser-based game. It was mainly a text-and-image thing, and it was my full-time job since 2007, but I wrote it even before that. Initially I wrote it in PHP, and it was not a good choice. I started accumulating technical debt really quickly, and many years later I decided to switch to Rails, but that was also a problem for me, because I straight away ran into scaling problems. I had a lot of users already, a lot of different assets, a lot of different components in there, and it became clear to me that I would have to spend a lot, a lot of money to even get it working the same way the PHP project was working. So in 2016 I decided to take a little break and dedicate some time to learning, and maybe figure out how I can get this project off the ground again. Just to show you what it used to look like, you can see a few screenshots from the game. As I mentioned, it's mostly images, text, and pure HTML. And another thing that I want to mention is we used to host this special thing, an April Fools event, that happened almost every April 1st.
And during that time, I usually published a really small project that was really crazy, really silly, and for me it was a way to test new ideas and new technologies. Just to give you an idea of what that was about: one of those projects was a game where you had your own pet frog and you sent it on some crazy adventures, like flying in balloons and stuff like that. Yeah, so the current game is also in the same vein. It's completely silly, completely cartoonish, but this allows me to try different concepts, do really crazy stuff, and when crazy ideas come up, incorporate them into the game right away. You can see some of that craziness in the screenshots — it kind of fits into this silly atmosphere. So, just really quickly, the concepts that I'm using: I'm using, of course, Elixir and Phoenix. I'm using umbrella apps, and Phoenix is basically just the web front end, with all the other moving parts in separate umbrella apps. And as I mentioned, I'm communicating heavily over Phoenix channels and handling all the complex front-end stuff in Elm, but at the same time I'm using jQuery for just some simple things, maybe some animations, stuff like that. My current idea is to release everything as soon as possible and then send an email or a Facebook message to the players from our previous games — even on Facebook I have around 9,000 people — so I really want to invite them and see how everything works: rent some really cheap hosting, spin it up there, and see how it looks in real life. And yeah, I just want to mention a few things that I already learned during this conference, because I had, for example, issues with releases when I was experimenting with them, and it looks like Distillery handles those things really well. And at the same time I had some — maybe not problems, but some different ideas — about how to structure my app, and it looks like the new version of Phoenix takes care of some of that.
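The setup just described — Phoenix as a thin web front end that relays real-time messages over channels to game logic living in a separate umbrella app — might look roughly like this. This is a sketch, not the speaker's actual code: the module names (`GameWeb.GameChannel`, `Game.Engine`), the `"game:lobby"` topic, and the message shapes are all made up, and it assumes Phoenix is a dependency of the web app.

```elixir
defmodule GameWeb.GameChannel do
  use Phoenix.Channel

  # The Elm client joins a topic; we stash its player id on the socket.
  def join("game:lobby", %{"player_id" => player_id}, socket) do
    {:ok, assign(socket, :player_id, player_id)}
  end

  # Incoming game commands are forwarded to the game engine, which lives
  # in a separate app under the umbrella; the channel is just plumbing.
  def handle_in("move", %{"x" => x, "y" => y}, socket) do
    result = Game.Engine.move(socket.assigns.player_id, {x, y})
    # Let every connected player see the move in real time.
    broadcast!(socket, "moved", %{player: socket.assigns.player_id, x: x, y: y})
    {:reply, {:ok, %{result: result}}, socket}
  end
end
```

The design point is the separation: the Phoenix umbrella app owns transport and nothing else, so the engine can be developed and tested without a browser in the loop.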
And yeah, that's it. I've got the link at the top — you can get to my GitHub repo with the game, it's open source there — and that's also my handle on Twitter. Thank you.

Hey guys, good afternoon. My name is Fernand Galiana; you can find me on Twitter. How many people here are really digging ElixirConf so far — the presentations, the keynotes, the conference organizers? Well, make some noise! Come on. Yeah, cool. All right, so let's go. If you don't live on the edge, you're taking up too much room, right? So let's talk about that. I think the community is great, and I'm sure most of you have run into issues where you develop a Phoenix application, and then you get to the point where you need to deploy it, and that's where the story kind of breaks down a little, right? To me, having to reach out and use either Capistrano or Bash feels a little bit Jurassic when it comes to deployment, especially in 2016. So I'd like to introduce a new mode of presenting — I'm going to go fast, so please don't boo and kick me off the stage. But really, I want to talk about Kubernetes, which is an orchestration framework that was developed by Google over many years to manage and orchestrate thousands of servers in the cloud. I've spent a lot of time with Kubernetes, playing with it, and this is something that I think we should — or could — leverage to deploy our Elixir and Phoenix applications. So let's go through this really quick. Of course, I won't have that much time, but you can think of it like this: you have a pool of servers and instances, and Kubernetes manages and orchestrates those instances for you.
Basically, you can deploy what Docker calls containers into an infrastructure, and you can specify which instances are most appropriate for your application — in terms of, this instance has an SSD drive, or has great network connectivity, and so forth. I'm not going into too much detail, but what is the big deal here? You can use one orchestration framework — that's a huge point right here. You can use, locally, the same APIs that you will use when you deploy to production. And if you get nothing else out of this talk, remember this, because I see a lot of people who, locally, will fire things off by hand, move stuff around, and try to kind of piecemeal their application together locally — and then comes, you know, D-Day, the production day, and it's like the sky is falling. Wouldn't it be really cool if you could write your deployment recipes along with your code, version all those recipes along with it, share them with your teammates, and then, come D-Day, it's exactly the same stuff, except bigger? You're going to have more instances of your database, more instances of your services, but everything is the same, and you can run everything locally today — I'll show you how in a second. You also have a built-in DNS, you can name things in ways that make sense, you can link containers like you've seen with Docker Compose, if you're familiar with that, and you can manage your deployment as code, as a living thing, which I think is awesome. So in essence, you're running a pretty faithful instance of what you will have in production come D-Day, and you can do that from the get-go.
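A "deployment recipe versioned along with your code" could look something like the following. This is a hypothetical manifest, not one from the talk: the app name, image, and port are invented, and the manifest shape follows the standard Kubernetes Deployment/Service API of the time (`extensions/v1beta1` was the Deployment API group circa 2016).

```yaml
# deploy.yaml — hypothetical recipe for a Phoenix app, checked into the repo.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myapp-web
spec:
  replicas: 2                 # bump this number come "D-Day"
  template:
    metadata:
      labels:
        app: myapp-web
    spec:
      containers:
      - name: phoenix
        image: myorg/myapp:1.0.0
        ports:
        - containerPort: 4000 # default Phoenix port
---
apiVersion: v1
kind: Service                 # built-in DNS: other pods reach this as "myapp-web"
metadata:
  name: myapp-web
spec:
  selector:
    app: myapp-web
  ports:
  - port: 80
    targetPort: 4000
```

The point from the talk: `kubectl apply -f deploy.yaml` is the same command whether your current context is a local cluster like Minikube or your production cloud — same recipe, just more replicas.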
You can do your own local cloud management — they have a really great command-line interface that you can use, and you can switch contexts and basically sit here locally and manage your AWS cloud or DigitalOcean cloud with exactly the same commands that you use on your own machine. There's a very cool dashboard showing you everything that's running, what versions are running, and so forth. You can scale your instances — I think there was a pg2 talk this morning, and people were asking, great, I've got several instances of Phoenix, how do I hook everything together? There's a Service Discovery API — that's been a hot topic, so let's look at that too. (Out of time? Are you kidding?) Okay, there's much more — I've got a Slack-controlled application — but I'm not doing it all, thank you. Thank you.

So yeah, thanks guys for getting me an HDMI cable. My name's Aaron Renner, I'm a software engineer at Telnyx. We're a VoIP service provider, and we run in a microservices environment, which has allowed us to put about three Elixir apps into production. This is just a simple, stripped-down version of the microservices that we have set up, and the part I'm specifically working on is number porting — transferring numbers from carrier to carrier. When you're in a microservices environment, there are a lot of interdependencies, services calling other services, so I just want to talk about how we test the interaction between the number porting service and the number details service, which says, this phone number is based in Orlando, Colorado — er, Orlando, Florida. I'm from Colorado. Okay, so there's a portability check endpoint. You give it a phone number, and then it goes and looks up the location at that external service, checks our database to see if we have coverage in that location, and then says yes or no on whether we can port that number.
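The flow just described — look up the number's location via the external service, then check local coverage — can be sketched in a few lines. This is illustrative only (not Telnyx's actual code): the lookup and coverage functions are passed in as stand-ins for the real HTTP call and database query.

```elixir
defmodule NumberPorting do
  # `lookup_location` stands in for the call to the external number details
  # service; `covered?` stands in for the coverage check against our database.
  def portable?(phone_number, lookup_location, covered?) do
    case lookup_location.(phone_number) do
      {:ok, location} -> {:ok, covered?.(location)}
      {:error, reason} -> {:error, reason}
    end
  end
end
```

Passing the collaborators in like this is one way to make the "yes or no" decision testable without the external service; the talk goes on to show a different approach, swapping whole adapter modules instead.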
So we write an API client, which is pretty straightforward, to call the external service, and to test it you use something like Bypass to fake the server response, make the call, and assert that it's deserialized correctly. But what about integration tests? Say you're going to write a controller test for this endpoint, and it calls all the way down the stack. Well, you don't necessarily want to rely on that number details service to make your test pass. So, in Ruby, what I would do is reach for mocks. You could do something like this in Elixir again with Bypass, or the mock library, but then everywhere you have code that ends up making this call all the way through, you've got to insert those mock statements, and that's kind of a pain. Another alternative is to build a test version of the number details service, where you give it an already-known number and it returns a canned response. So I was trying to figure out what to do, and I ran across this quote from Jose, where he said, instead of mocking your existing adapter — using mock as a verb — create a mock, a noun: a mock object that implements that API. So that gave me an idea: okay, let's go in this direction of building a test version of the number details service. So now I have two adapters. I have the HTTP adapter that makes the request to the actual number details service, and I have a test adapter with some predefined responses — like for this Orlando number, this is the Orlando McDonald's, if anyone's looking for dinner. And then I go ahead and set my default adapter in config.exs, and my test adapter in test.exs. Then, to grab the current adapter, you call Application.get_env, which gives you back the current adapter module, and you call lookup on it. Now our integration test is a lot simpler.
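Put together, the pattern being described looks roughly like this — a behaviour as the shared contract, a real HTTP adapter and a test adapter that both implement it, and `Application.get_env` picking which one is active. Module names and config keys here are illustrative, not Telnyx's actual code.

```elixir
defmodule NumberDetails.Adapter do
  # The contract both adapters must implement (see "keeping them in sync").
  @callback lookup(phone_number :: String.t()) :: {:ok, map} | {:error, term}
end

defmodule NumberDetails.HTTP do
  @behaviour NumberDetails.Adapter
  # The real implementation would make an HTTP request to the external
  # number details service here; stubbed out for this sketch.
  def lookup(_phone_number), do: {:error, :not_implemented_in_sketch}
end

defmodule NumberDetails.Test do
  @behaviour NumberDetails.Adapter
  # Canned response for an already-known Orlando number.
  def lookup("4075550123"), do: {:ok, %{city: "Orlando", state: "FL"}}
  def lookup(_), do: {:error, :not_found}
end

# In config.exs:  config :my_app, :number_details_adapter, NumberDetails.HTTP
# In test.exs:    config :my_app, :number_details_adapter, NumberDetails.Test
defmodule NumberDetails do
  # Callers just call NumberDetails.lookup/1 and never need to know that
  # adapters are being swapped underneath.
  def lookup(phone_number) do
    adapter = Application.get_env(:my_app, :number_details_adapter, NumberDetails.Test)
    adapter.lookup(phone_number)
  end
end
```

With this in place, an integration test that feeds in the known Orlando number flows through `NumberDetails.Test` with no mock statements anywhere in the test itself.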
We can just say, here's the Orlando number, and we know it'll call all the way through to our test adapter, and we're good. We don't have to put any mocking or anything in here. But now we have two adapters, and we need to keep those in sync. We don't want things to pass with our test adapter but then not work when we get to production. So, a couple of ways to keep them in sync. First of all, if you use structs as your return values, you get compile-time checks. So if you use a field like phone_number in your test adapter but just number in your production HTTP adapter, this will catch those issues. Also, to keep the adapters in sync, you can use behaviours: each adapter is expected to implement these functions with these return types. This is checked by the compiler, and you can also use Dialyzer for even further checking. I also really didn't like spreading Application.get_env throughout my app. I didn't want anyone to have to know that we're switching adapters. So I created a proxy module that basically just passes those calls through to the current adapter. Down here at the bottom, you can see we just call NumberDetails.lookup, and the proxy passes it through. Thanks for your time. Here's a link, and I appreciate it.

All right, testing — all right, cool. I need this type of mic because I have to live-code. All right, so this is my first talk. This is my first library. I'm super nervous, but imagining all of you in your underwear is a super bad idea, so I'm not gonna try it. Now, one of the reasons I really like Elixir — like, really fell in love with it — is because it's lispy. It allows me to, at some point, write intent more so than I'm writing code, which is really awesome. And so, at some point I was making some web app, and I was trying my darn hardest to get a basic WebSockets API. I just wanted to be able to make a single-page application.
It would work in, like, Angular or React, and it would just ask the server for stuff, get information back, and communicate back and forth. I didn't need views, I didn't need models, I didn't need a bunch of complicated stuff. And I know that I could use, like, Phoenix, but, like, ain't nobody got time for that. So what I wanted to do was just make WebSocket APIs and worry about what was happening on the server after the fact. So I made this library for myself, and I decided, hey, why not put this on GitHub and see if people like it. And what this is, is very simple. So I'm gonna kind of demonstrate it for you today and get you set up. Now, I kind of forgot that my windows would move around when I plugged in, but I can't really avoid that, so. What I'm gonna do is show you how simple it is to get a WebSocket thing working in Elixir, just using a library that relies heavily on macros. We start in the mix file: basically we just have two deps, we have Cowboy — because everybody needs Cowboy in their life — and we have Sox, which is my library. Then it's super simple: you go to main, you have your standard Cowboy routing — this is where you would put an index and whatever — but right now we're gonna focus on /realtime, and we're gonna upgrade that to a WebSocket and use Sox.handle. Now in Sox we have a config, and basically we set an endpoint, which is a module that we route to — I kind of ripped off the whole endpoint nomenclature from Chris, don't worry about it — and then we have a set protocol. So let's try it out, and hopefully it won't crash, but you know. I'm gonna run IEx. Okay, so first of all — okay. What I'm gonna do is — I probably should have made this better for you guys — I'm just going to start a WebSocket connection. And I'm connected.
So now what this is gonna do is send that WebSocket connection to this module, and what I'm doing is I'm using Sox.Wip, and essentially I have these macros, like get. So if I say get, do nothing — let's try that — do nothing, okay, nothing's been accomplished. Basically it's just returning some calculation or doing something. But maybe I wanna set state — set a number and add to state. So this will, through the macros, add to state. Let's set thing to pi — okay, done. And so what did it do? It pattern matched, took all of this and compiled it into the actual output code, and that's emitted into the BEAM and being run. So it returns okay, done, but then here it updates the state with thing equals pi. So now if I wanna get that thing, I simply get — and then I put "get the thing, and return thing". So I say get the thing, and that would be pi. And just so that I'm not talking crap, I'm gonna try it with something different — and that would be "kick". So it is actively modifying the state, it's merging it automatically, it's checking for all of the inconsistencies and whatever. And not only that, but if I don't send a correct command, it actually has global fault tolerance, so it will essentially complain at you. Now in the config you can also get around that by setting a global fallback, which won't blame you when you f up. So now I'm gonna show you something really cool, which is where I personally like to use this: you can essentially have a third argument that you return — oh man, really? Time's up? Would you be agreeable to letting me finish? Thanks, dude. Thank you.
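Sox's actual internals aren't shown in the talk, so here is a generic sketch of the technique it describes: a macro-based DSL where `get`/`set` declarations expand, at compile time, into handler functions that pattern match on incoming commands and thread state through. Every name here (`MiniDSL`, `Handlers`, `handle/2`) is made up for illustration — this is the shape of the idea, not the Sox API.

```elixir
defmodule MiniDSL do
  # Each `set` expands into a handle/2 clause that merges the new value
  # into the state map and replies :ok.
  defmacro set(name, do: block) do
    quote do
      def handle({:set, unquote(name)}, state) do
        {:reply, :ok, Map.put(state, unquote(name), unquote(block))}
      end
    end
  end

  # Each `get` expands into a clause that replies with the stored value
  # (falling back to the block's value) and passes state through unchanged.
  defmacro get(name, do: block) do
    quote do
      def handle({:get, unquote(name)}, state) do
        {:reply, Map.get(state, unquote(name), unquote(block)), state}
      end
    end
  end
end

defmodule Handlers do
  import MiniDSL
  # At compile time these two lines become two handle/2 clauses.
  set :thing, do: 3.14159
  get :thing, do: nil
end
```

So `Handlers.handle({:set, :thing}, %{})` yields an updated state map, and a subsequent `{:get, :thing}` replies with the stored pi — the same set-then-get round trip shown in the live demo, minus the WebSocket transport and fault-tolerance fallbacks.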