So I'm going to talk about building APIs in Phoenix. First, my company, DockYard: we build Ember applications, and Ember is really the reason that brought us to Phoenix as a back-end technology. We started as a Rails shop, but as we built more and more complex client-side applications and paid the cost of performance optimization on those Rails back ends, we found that those performance issues are really magnified in client-side apps. So we actually stopped doing back-end development for about a year and a half. And when Phoenix came out, when it got onto my radar at least, we pursued it very heavily, to the point where, I don't know if it's true, but at least I've heard it's true, that we were one of the first shops to come out and say we were betting on Phoenix as our back-end technology. And we've continued to invest in that bet. We've released a lot of libraries, many of which I'm going to be talking about today. And a couple of months ago we actually ended up hiring Chris to come on board, and we have him working on Phoenix nearly full-time. We look at that as protecting our investment, so that he can keep making Phoenix the best framework possible. This is our little version of our logo, Phoenix-ified, however you want to say it. There are stickers of it out in the hall, and some over here if you want to grab some.

I think a few people in the audience follow me, so if you do, you saw that yesterday I went on a bit of a rant on Twitter. It started off being about page objects in Ember and it continued quite a bit, fueled by the fact that I had missed my flight and was sitting in the airport. I eventually got to the point where I was grasping at straws and started attacking abstractions. And then, ironically, my last tweet was that today I'd be going over all the abstractions we've written at DockYard. So we'll be getting into that. Apologies for that.

So that's the end of my slides; I'll be in code for the rest of it. I'm going to walk through a basic API that I built out last night. It has some common components: account creation, authentication, a query API, and some relationship data. I'll show the library-less version first and some of the pains that come with it, and then we'll look at the libraries we've written and the impact they have on the code base.

I should start by saying that the application here is built with JSON API in mind. If you're unfamiliar with JSON API, it's a schema format that's really being championed by the Ember community right now, and I think it's starting to spread outside of it. It came about when two of the creators of Ember saw that all of these RESTful services were emitting different schemas, and if you're consuming all those different schemas, you have to write a custom adapter for every single one. Whereas if you're consuming a JSON API service, the schema is already predefined. It's a spec, it's a standard, it's 1.0, it's all that good stuff. If you're interested, you can go to jsonapi.org to read more about it. The API here is written with it in mind, and some of the libraries we've written exist specifically to make building JSON API services easier.
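To make the rest of the walkthrough easier to follow, here's roughly what a minimal JSON API document looks like, written as the Elixir map you'd get after decoding the JSON. The field values are made up for illustration; the shape is what the spec at jsonapi.org defines.

```elixir
# A minimal JSON API document, decoded into an Elixir map.
# Values here are hypothetical; the structure is what matters.
%{
  "data" => %{
    "type" => "posts",
    "id" => "1",
    "attributes" => %{
      "title" => "Building APIs in Phoenix",
      # keys on the wire are dasherized, not underscored
      "published-at" => "2016-01-01"
    },
    "relationships" => %{
      "user" => %{"data" => %{"type" => "users", "id" => "1"}}
    }
  }
}
```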
However, some of the concepts we'll be going over could be brought into other JSON-schema-style APIs. JSON API is a weird name to say, because it's the name of the spec, but it's also, you know, a concept in general. Anyway, we'll start right here in the router. The library we use to build out our JSON APIs is called JaSerializer. This is not a DockYard library; it's written by Alan Peabody at Agilion, up in Vermont, and it's a really nice serialization library. When we first started building JSON API back ends with Phoenix, it was the only one available; I think if you go on Hex there are a few more now. But what I like about JaSerializer is that it tries to cover a bunch of edge cases. If you check out the JSON API spec, it's a very verbose spec. You plug JaSerializer into your API pipeline, and content negotiation will detect whether the Accept header and the Content-Type header are correct for JSON API. Then the deserializer will take the inbound request parameters and do certain things to make them easier to use in Elixir: all keys in JSON API are dasherized, meaning hyphenated, and JaSerializer will underscore them, that type of thing. We then pipe through the API pipeline into our API scope. The next thing you have to do, and all of this is covered in the JaSerializer readme so I won't spend too much time on it, is configure Plug. I don't know if you can see the text down there, but you have to configure Plug to handle the JSON API MIME type, and then you have to go recompile Plug. All of that's covered in the JaSerializer readme.

Next, when we're creating a user, we want to keep as much code out of our controller as possible, at least to a certain degree. And for certain things like encrypting passwords, a really good place to encapsulate and handle this is the changeset. You can see here that our create function really only takes us down two paths with that case statement: either the happy path, meaning our data insertion happened properly and we render the happy result, or the sad path, meaning the changeset failed for whatever reason and we render out, in this case, a 422. Inside the changeset, oops, we handle our validations, but the nice thing is that after we're done with our validations, we can encrypt the password. So on the actual model here, we have a password_hash field, but we also have password and password_confirmation fields. Now, I know there are some newer authentication libraries that have come about, but we've continued to build our own custom authentication internally, at least until I feel those other libraries provide enough value for us to stop doing this. The reason we build out our own authentication is because it's so simple in Elixir. It's actually really nice: there's a library called Comeonin, and we use bcrypt, and to create our password hash we just pass the password through its hashpwsalt function. And we only encrypt if our changeset is valid. So now that we have this, our user is actually created.
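Here's a minimal sketch of that kind of changeset, assuming the Comeonin library's Bcrypt module. The module and field names mirror what I just described, but this isn't the repo's exact code.

```elixir
# A minimal sketch of the changeset described above, assuming
# Comeonin.Bcrypt.hashpwsalt/1 from the comeonin library.
defmodule MyApp.User do
  use Ecto.Schema
  import Ecto.Changeset

  schema "users" do
    field :email, :string
    field :password_hash, :string
    # virtual fields never hit the database
    field :password, :string, virtual: true
    field :password_confirmation, :string, virtual: true
  end

  def changeset(user, params \\ %{}) do
    user
    |> cast(params, [:email, :password, :password_confirmation])
    |> validate_required([:email, :password])
    |> validate_confirmation(:password)
    |> encrypt_password()
  end

  # only hash the password once all the validations have passed
  defp encrypt_password(%{valid?: true, changes: %{password: password}} = changeset) do
    put_change(changeset, :password_hash, Comeonin.Bcrypt.hashpwsalt(password))
  end

  defp encrypt_password(changeset), do: changeset
end
```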
Now, if you want to authenticate a session, we have a, whoops, that's the wrong controller. We have a session controller, and again we're handling the happy path and the sad path. What we've done is extract all of the authentication code into a strategy module, which we just put in a strategies directory. If you're coming from another language like Ruby: in Phoenix you can put pretty much anything in any directory. There's no requirement that your module name map back to a given directory structure, because nothing is trying to load a file by guessing its location from the class name. Everything gets compiled when you're running the server, so it's just available at runtime.

Our authentication strategy here ends up being pretty simple, and it allows us to build out authentication with different types. If we want to authenticate against, say, an incoming email with an associated token, we can just use Elixir's pattern matching right in the function definition. But here, what I like to do is grab the ID of the account and also make the authentication strategy polymorphic. So we grab the type of the account, by grabbing the struct off the account model, and we sign both into the session. The reason we do this is that we may have different models handling different types of authentication. If we have a regular user model, a client model, and an admin model, we want to be able to differentiate between them; we don't want to rely on the account ID alone.

Next, if we go back to the router for a second and look down here, we can see that we have a pipeline for authorization. In this case, we have a post that we want to create, but we only want to allow that if we have the rights to do it, meaning we're signed in as a user. Here we've limited that to the create, update, and delete actions. If we take a look at the pipeline right here, we're using the authorization strategy, and this is actually role-based authorization. You can pull this out of this repo, which is going to be available for everyone to check out if you're interested in authentication. What ends up happening is that I don't have to build a really ugly nested if statement with many different conditionals; I can create something fairly flat by relying, again, on pattern matching, which ends up being really, really nice. It goes through here, checks that I'm authenticated, and if I'm not, it returns an unauthenticated response and just halts the request.
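To make that concrete, here's a rough sketch of both pieces: a pattern-matched authentication strategy that signs the account's ID and struct type into the session, and a plug that halts unauthenticated requests. The module and function names are my own, and Comeonin.Bcrypt.checkpw/2 is from the comeonin library; this is not the repo's exact code.

```elixir
# A hedged sketch of the strategies described above; names are hypothetical.
defmodule MyApp.Strategies.Authentication do
  import Plug.Conn
  alias MyApp.{Repo, User}

  # Pattern matching in the function head selects the authentication type;
  # other heads could match on a token, an API key, and so on.
  def authenticate(conn, %{"email" => email, "password" => password}) do
    with %User{} = account <- Repo.get_by(User, email: email),
         true <- Comeonin.Bcrypt.checkpw(password, account.password_hash) do
      {:ok, sign_in(conn, account)}
    else
      _ -> {:error, :unauthenticated}
    end
  end

  # Polymorphic: store the struct's type alongside the ID, so a User,
  # a Client, and an Admin with the same ID can never be confused.
  defp sign_in(conn, account) do
    conn
    |> put_session(:account_id, account.id)
    |> put_session(:account_type, account.__struct__)
  end
end

defmodule MyApp.Strategies.Authorization do
  import Plug.Conn

  def init(opts), do: opts

  # If something earlier already assigned an account, let it through
  def call(%{assigns: %{current_account: %{}}} = conn, _opts), do: conn

  # Flat, pattern-match-friendly check instead of nested conditionals
  def call(conn, _opts) do
    case {get_session(conn, :account_type), get_session(conn, :account_id)} do
      {nil, _} -> unauthenticated(conn)
      {type, id} -> assign(conn, :current_account, MyApp.Repo.get(type, id))
    end
  end

  defp unauthenticated(conn) do
    conn
    |> send_resp(401, "")
    |> halt()
  end
end
```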
So a lot of this stuff is fairly simple as is, and I'm sure those in the audience who have already built Phoenix applications are looking at this and saying, ah, we've done this already, this isn't too interesting. But where it becomes a little more interesting is when we get into posts. One pattern that I really like, and that I've now used on two applications, is composable queries. In this case, somebody hits the index action and either says "I want all the posts," or provides some query params and says "I want all the posts for a given user ID."

And with Ecto, because we can build up this query object over time, and by leveraging recursion, we can iterate through all the query params and build up something powerful very easily. This very simple version of it ends up being, what is it, less than ten lines. On line 8 we hit this build query function and pass in the query params. On line 17 we convert that map into a list, so now we can iterate over it properly, and call the build query function again; on line 16 you can see we're guarding against the map. Next it goes into line 21, where we pass in q, which the first time through is the Post model, and it just iterates over the key-value pairs. This very simple version says: okay, for each key equals this value, we just add that statement onto the query, and we keep iterating until we're done. Then it goes back up to line 10 where, because of the nasty nature of testing JSON API responses, which I'll show in a minute, we put in this order_by just to normalize the tests a little until I show you the other libraries. Then we just call Repo.all.

And this feels great. This feels like a really, really nice pattern to build on, so nice that I extracted it into a library called Inquisitor. With Inquisitor, you pass in the model you're going to act on, and during the compilation phase it writes into your module a function named after that model; you just call build_post_query and pass it the params. Inquisitor gives you the default behavior of mapping keys to values, but there are more complex situations where you don't want plain key-value matching. Maybe you want a limit to be available on the query params, in which case someone can just do limit equals 5 or limit equals 10. Or perhaps, as we're doing here on line 16, you want to query against the publish date of the post: we guard on the month and year keys, build out a date fragment in SQL, and add that onto the query. So what this looks like in use is... that's probably really small, right? This may not look fantastic at big sizes. I probably have to run my server. With this very simple query statement, we're getting back everything we need; there are only three pieces of data in this database, and we just asked for all the posts in January 2016. But we can compose more onto this. I have a limit down here on line 22; I can add this, and now we only get one result back. So with Elixir's pattern matching and recursion, Inquisitor lets you build out a fairly complex query system very, very quickly.
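Here's a condensed sketch of that hand-rolled pattern, before Inquisitor extracts it: the controller kicks off the recursion, each function head handles one kind of param, and the empty list ends it. The module names and the specific handlers are my own illustration, not the repo's exact code.

```elixir
# A hedged sketch of the recursive query builder described above.
defmodule MyApp.PostController do
  use MyApp.Web, :controller   # hypothetical Phoenix controller setup
  import Ecto.Query
  alias MyApp.{Repo, Post}

  def index(conn, params) do
    posts =
      Post
      |> build_query(Map.to_list(params))
      |> order_by(asc: :id)      # normalize ordering for the tests
      |> Repo.all()

    render(conn, "index.json-api", data: posts)
  end

  # Params exhausted: the accumulated query is complete
  defp build_query(query, []), do: query

  # Custom handler: ?limit=5 caps the result set
  defp build_query(query, [{"limit", value} | tail]) do
    query
    |> limit(^String.to_integer(value))
    |> build_query(tail)
  end

  # Default handler: each remaining key/value pair becomes a where clause
  defp build_query(query, [{key, value} | tail]) do
    query
    |> where([p], field(p, ^String.to_existing_atom(key)) == ^value)
    |> build_query(tail)
  end
end
```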
All right, let's move on to testing and how we're doing it. Let's check out what testing JSON API responses can look like if you're not using some of these libraries. We'll just go to the nastiest one, which is posts. Not only does JSON API have a verbose response schema, it also has a verbose request schema. In order to send data to a JSON API endpoint, you have to observe its schema, and that ends up being kind of a pain in the butt, right? The object is a data object, nested within that is a type, the type name of the object, then the attributes, and there may be an ID in there as well. Doing this over and over and over again becomes pretty monotonous.

But what's even worse is asserting the expected payload. If we have relationship data coming back, we have the original primary object, then the relationships embedded within that object; and we may be including the full objects, not just their metadata, in a separate top-level object called included. Clearly this is not a fantastic way to write tests, and you can even tell, as we go further down, that my editor starts throwing a fit because it can't syntax-highlight it anymore. It's all valid syntax, but it's become so verbose that right now I think it's all blue. It's completely gone, you know, bluey on me. Doing this is a huge... I don't want to say waste of time, because testing is fantastic and you should be doing it, but it's not really helping you if you're just killing yourself doing this.
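For a flavor of that boilerplate, here's a hedged reconstruction of the kind of test I'm describing, not the repo's actual code; the paths and values are made up.

```elixir
# Roughly the boilerplate being described: build the JSON API request
# schema by hand, then assert equality on the whole verbose response.
defmodule MyApp.PostControllerTest do
  use MyApp.ConnCase   # hypothetical ConnCase

  test "creates a post", %{conn: conn} do
    payload = %{
      "data" => %{
        "type" => "posts",
        "attributes" => %{"title" => "Hello", "body" => "World"}
      }
    }

    conn = post(conn, "/api/posts", payload)

    assert json_response(conn, 201) == %{
      "data" => %{
        "type" => "posts",
        "id" => "1",
        "attributes" => %{"title" => "Hello", "body" => "World"},
        "relationships" => %{
          "user" => %{"data" => %{"type" => "users", "id" => "1"}}
        }
      },
      "included" => [
        %{"type" => "users", "id" => "1",
          "attributes" => %{"email" => "test@example.com"}}
      ]
    }
  end
end
```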
So one of our engineers, Dan McClain, wrote a library a few months ago called Voorhees. If you're familiar with Jason Voorhees, the masked serial killer: Voorhees, Jason, JSON, that's where the name comes from. So, yeah, I know. Exploring alternative names. I think a lot of this shows how we're all evolving and learning how best to write libraries in Elixir. A lot of us are coming from different ecosystems, and it takes a little time to shift your mindset from the best practices of some other library, framework, or language over to Elixir and Phoenix. The first pass of Voorhees, I should say the current pass, because what I'm about to show you is very experimental, just has you pass in some expected attributes; it's a somewhat nicer API for doing something as verbose as this. But the more I've been writing Elixir, the more I want what's on line 13 here: I want piping, I want composability, something very simple but also very powerful, where I can just pass things through without any issue.

So the updated version is now this. Rather than building up those huge JSON objects in our tests and doing a straight equality assertion, we get these assert_data and assert_relationship functions. You pass in your model and, actually that's not correct: this is assigned to the conn variable, but it should really be a payload, because json_response on line 24 returns the body of the response. It takes the body of the response and iterates through the data object to find whether there's a data object matching the model you passed in. It makes some assumptions for you: it tries to work out what the model's primary key is and find the value for it, and it tries to find the type itself, which it may pull off the struct or ask Ecto for help with, depending on what you pass in. Then it forces the data segment of the response to a list, iterates through it, and tries to find a corresponding data object with the same ID and type. At that point, it iterates through all the attributes inside that data object and makes sure that the corresponding fields on the original model have the same values. If that's the case, it's happy; it asserts that. With assert_relationship, it goes through, finds that same data object, and checks whether the relationship metadata, in this case for user one, exists. So here we'd expect a JSON API response from the posts endpoint, and we want to make sure it's also returning which user is associated with this post. This is a huge improvement over asserting equality on a whole segment of the response, and it ends up being a really, really nice API to work with.

In addition to that, and I'm not sure if you saw it previously, so I'll bring it up again: dealing with authentication and authorization in your test suite can be a little hairy as well. Here on line 31 we're posting to the session path to create a session. Then we have to recycle the connection object, and, I don't know if this is a bug or not, but the connection loses its content type after that post, so I have to set the content-type request header again here. So we have another library we've written, called AuthTestSupport, which gives you some basic authentication and authorization test helpers to ease the pain of doing all this boilerplate over and over again. The first of these, here on line 22, is authorize_as, which just authorizes you as a given account; in this case it simply sets the account in the connection's assigns. Our authorization strategies are written so that they first check the assigns: if the assigned account exists, that's the account you're authenticated as; otherwise they fall back to the account ID and account type in the session and authenticate against those.

In addition, we get this nice macro called require_authorization. It expands out to a huge test, and what it saves you is the boilerplate of asserting 401s for pretty much all the actions you don't want to allow access to, whether you're not authenticated at all or you're authenticated as a user who shouldn't have access to that route. By default you can just do this here and it will run against all the regular RESTful actions. You can send it only:, you can do except:, but you can also do different roles. Say you had an action that only admins should be able to access: you want to test it while not authenticated in any state, and while authenticated as a regular user. In that case you'd do something like no_auth and then auth. In this keyword list, the atoms on the left-hand side are just for documentation purposes. There is no magic about no_auth; I just happen to use that name, and there's no magic about auth either. The value I assign to the auth key would be something like a function, I'll just call it auth in this case, and when it's actually compiling the macro and building out this test, it will take the connection object and send it to that function. So you can do whatever you want with the connection object at that point, prior to hitting the API endpoints it's going to test against. We might have something like this: just conn, and it would do something like auth_regular_user(conn) and return the connection object. That function clearly doesn't exist, but I think you get the point.
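From memory, that usage looks roughly like the sketch below. The option shape, role names, and helper are all hypothetical; check auth_test_support itself for the exact API.

```elixir
# A hedged sketch of require_authorization usage; option and helper
# names here are assumptions, not auth_test_support's documented API.
defmodule MyApp.AdminControllerTest do
  use MyApp.ConnCase
  import AuthTestSupport

  # Expands to one test asserting 401s across the RESTful actions,
  # once per role: unauthenticated, then as a regular user.
  require_authorization :admin_path,
    roles: [no_auth: nil, auth: &__MODULE__.auth_regular_user/1]

  def auth_regular_user(conn) do
    user = MyApp.Repo.insert!(%MyApp.User{email: "user@example.com"})
    authorize_as(conn, user)
  end
end
```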
The other thing we've done, being mindful of compilation time and speed: it does not pump out 20 different tests, it pumps out one test. You may not be aware, but when you're building macros that then emit more macros, it's actually slow. So it generates one kind of monolithic test that iterates through all of this for you. Where we get away with that is in ExUnit's assertion messaging: if something fails for the update action, if it doesn't return the unauthorized status code, it will tell you that properly in your test suite. So that's require_authorization.

And I want to go down to... I don't know if you remember from a few minutes ago, but the actual querying tests were huge. They were 40, 50, in some cases close to 100 lines of code, and we've cleaned that up into probably less than ten lines here. What's really nice is that when we're querying sets where we want to make sure we're not returning certain data, Voorhees gives us refute functions in addition to the assert functions. Here we're querying against the user ID, and we want to make sure we only get posts back that belong to user one, so we refute the data of post two, because it's associated with user two, which also exists in that set.

On the model side, and let me show a more complex one, we've written a library called ValidField, which lets you test your changesets. We take the original model struct and pass it through this with_changeset function. If you don't pass it any arguments, it assumes the regular Model.changeset function is the one you're using, but you can customize it and pass a reference to whatever changeset function you need. The idea is that this style works really well for unit testing your validations. We don't actually care which validation is satisfying these conditions, and you shouldn't care either; what we care about is the behavior driven by the changeset. Something like a format validation and exclusion or inclusion validations may satisfy the same condition, and it's up to you which one you use. So on line 11, for email, we care that test@example.com is a valid state, and that nil, the empty string, and foobar are invalid states. We've done an application where we weren't allowed to accept users with military email addresses. In that case, we'd just add a military email address to the right-hand side, run our test suite, and watch it fail; then we'd add a format validator to ensure we're not accepting anything with a .gov or .mil domain on the email address. In addition, there may be situations where you're testing against data that needs to be set up. So on line 18 there's a put_params function that lets us inject some data into the changeset, so we can test something like password confirmation.
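Putting that together, the style looks roughly like this sketch, built from the with_changeset, put_params, and the assert/refute field functions as I've described them; the field values are illustrative and the exact API may differ slightly.

```elixir
# A hedged sketch of the ValidField style described above.
defmodule MyApp.UserTest do
  use ExUnit.Case
  import ValidField

  test "email and password validations" do
    # defaults to MyApp.User.changeset/2 unless told otherwise
    with_changeset(%MyApp.User{})
    |> put_params(%{"password" => "secret", "password_confirmation" => "secret"})
    |> assert_valid_field(:email, ["test@example.com"])
    |> assert_invalid_field(:email, [nil, "", "foobar"])
    |> assert_valid_field(:password_confirmation, ["secret"])
  end
end
```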
Where this pattern doesn't work, and I haven't come up with a good solution, is against database constraints. In Ecto, things like uniqueness and other constraints can be mapped back to changeset constraint errors, so you can capture those messages and emit and handle them properly. However, what happens in your database is that you can't target which constraint fires in which order, at least in Postgres; I don't know whether other databases differ. Say you have a list of three different database constraints that you're checking against, and you want to assert that the uniqueness constraint works. From a conceptual point of view, it's easy to set up, right? You create an existing record in the database with the same email address, run it through here, and you should capture the error. But in reality, unless the uniqueness constraint happens to be the first one the database fires, it will hit another constraint first and just completely blow up. We could do something a little hairier and set up all the other values needed to satisfy the other constraints, but that's not a super clean solution, and I like clean solutions. So I'd be interested to know if anybody has ideas around that.

Finally, there's another library I've written called EctoFixtures. I know there are a few other test-data harness libraries out there. I've been writing EctoFixtures for the past couple of months, and it does some things well and some things not so well. Specifically, its current API for generating data is not fantastic, but I consider that a nuance to be solved. The part I think it does really, really well is handling complex data sets. Whereas other libraries may insert data one record at a time as it's accessed, EctoFixtures collects all the data you're looking to insert and then analyzes it to determine whether any sorting needs to happen. For example, if you're using relationship constraints in your migrations, you may have foreign key constraints in the database, so with other libraries you may have to observe the ordering dependencies yourself, saying: I need to make sure the parent is inserted before the child. EctoFixtures doesn't care. It builds a directed acyclic graph internally, analyzes the relationships between all these data points, and makes sure all children are always inserted after their parents. That makes the pain much, much easier. In addition, in the current version we're leveraging test tags, so we can just do @tag fixtures:, name the fixture files we want to pull from, and over in the test it's always injected as a data key on the context.
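As a sketch of that tag-based flow: the fixtures tag and the data context key follow what I just described, but the fixture file and record names here are hypothetical.

```elixir
# A hedged sketch of tag-based EctoFixtures usage; fixture and record
# names are made up for illustration.
defmodule MyApp.PostsFixturesTest do
  use MyApp.ConnCase

  @tag fixtures: [:posts]
  test "fixture records are inserted and exposed", %{data: data} do
    # Every record in the named fixture file is inserted up front,
    # parents before children, then put on the test context.
    assert data.posts.one.title == "First post!"
  end
end
```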
In Elixir 1.3, and I'm not sure when that's due out, I actually made some commits to ExUnit to allow module attributes other than @tag to be collected. If you're unfamiliar with module attributes, I mean, you've seen them right there: @tag is a module attribute. But if you're unfamiliar with how the tag attribute actually works in Elixir: during the compilation phase, whenever the test macro collects it, it deletes all the tag module attributes that currently exist. That's why this tag can be completely separate from that tag. But in addition to that, Elixir lets you accumulate module attributes. You can set the same attribute here and here, and then within the actual setup, or on the context object, you'll be able to grab that data with everything accumulated properly. So what will happen in Elixir 1.3, and hopefully other libraries will use it too, is that we'll have a fixtures module attribute and we'll be able to keep accumulating it. So you could do @fixtures :dogs, and then pass options off of that to customize what the data set is. We can't currently do that with tags: if we did @tag fixtures: again, it would blow away the previous value of fixtures; we can only insert new keys. So that's coming. Actually, I don't know when it's coming, but hopefully soon, because it'd be pretty cool.
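For reference, attribute accumulation is a plain Elixir feature, independent of ExUnit. A minimal sketch:

```elixir
# Module attribute accumulation in plain Elixir -- the language feature
# itself, not ExUnit's implementation.
defmodule Accumulating do
  Module.register_attribute(__MODULE__, :fixtures, accumulate: true)

  @fixtures :dogs
  @fixtures {:cats, name: "Fluffy"}

  # At compile time, @fixtures holds every value set above,
  # most recent first.
  def all, do: @fixtures
end

IO.inspect(Accumulating.all())
# => [{:cats, [name: "Fluffy"]}, :dogs]
```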
So, how much time do I have left? Okay. These are some of the libraries we've written at DockYard, which I've been pulling out and extracting like a madman over the past week to get them ready. They're very early in development, but I think they're good enough to at least start playing with. In fact, some of them only exist in PRs right now: the assert_relationship and assert_data stuff in Voorhees exists in a pull request, and the tag module attribute work exists in a pull request. We're in this early stage right now for Phoenix development. I hear from a lot of people who are interested in writing Phoenix applications, but they're coming from other frameworks and places where it's safe for them to be, right? They have this huge ecosystem of tools. The lessons we're learning at DockYard, we want to share with people. I think the only way we can really make Phoenix successful is if everyone's doing this, if everyone's learning and coming back and trying to build the best tools possible; but also, we should not always be trying to go back and reinvent what exists in other languages.

One of the lessons I've taken away from doing a lot of this work is that composability is king. Composability is really one of the key qualities I like in the libraries we're building out. However, you should not force composability. When you're building out libraries and extractions, keep composability in mind for what the API may look like; but if you feel like you're really just forcing composability down the API's throat, then it's perhaps not the right thing to do.

So I'll end with this: links to everything. The repo I was showing off, if you want to go through it, dissect it, and pull out anything you want to use, is on my personal GitHub, bcardarella; it's right on there. And the other libraries are here: JaSerializer, Inquisitor, EctoFixtures, AuthTestSupport, ValidField, Voorhees, and Comeonin for the encryption. With the exception of JaSerializer and Voorhees, they're all available in the DockYard organization.

So that's all I got. Thank you guys very much.