in Elixir. First of all, I have to apologize to everyone in the audience for not being there. Unfortunately this happened in Boston and my flight was right in the middle of it, but as somebody who runs conferences myself I understand the frustration and the anxiety that goes along with speakers being late and potentially missing the conference, so a special apology to Jim. I really appreciate the effort that he's gone through to allow me to still speak despite not being on-site. Despite all that, it seems from what I've seen on Twitter and from talking with people that the conference is going really well, and you can tell that it's a really good conference because it already has its own meme, which is just fantastic. From working with Chris day-to-day, I can tell you that this is definitely more fact than fiction. All right, the video streaming: I'm not sure if you see both me and the slide deck, but if you happen to see me, the audio for sure is being run through an application called MeetSpace. A friend of mine runs this app, his name is Nick Gauthier, and he was nice enough to give me a free account to facilitate me speaking at this, so I'd just like to plug his company for a little bit. It's a Google Hangouts competitor, but it uses WebRTC: higher quality audio connections, higher quality video connections, because it's P2P rather than going through a centralized server. You should check it out. And I work at DockYard, and I'm sure by today people are sick of hearing about DockYard already, but I promise you I'm the last DockYard speaker. Okay, so I want to talk first about a problem that I see growing in the Elixir community. I have to be very careful about how I articulate this, because I don't want it to come off as me not wanting new people in the community, or trying to turn away people that have already come to the community from other languages.
However, there is an acute kind of misunderstanding about what Elixir and its potential is. I hear a lot of people saying, and I'm guilty of this as much as anybody, I've said some of these things myself, but as you begin to learn Elixir and Erlang and the platform itself, you begin to understand that there are some major differences between it and the languages you may be coming from. So we're led to believe, like the classic evolution-of-man-to-the-computer image, that along came Ruby and Rails, and then Phoenix and Elixir are the evolution of Ruby and Rails. I want to say that no, this is not the case. We have to really stop having this conversation with ourselves, and with people outside the community to try to draw them in. I know that Ruby and Rails are a good conversation starter for saying, hey, there's this new thing that's faster, and it's new, and that's always fun to play with as a software engineer, but I think it's doing ourselves a disservice, because what I've seen over the past year or two is that some people transitioning from other languages are kind of just stopping their journey of becoming an Elixir developer at the speed. They are recreating the same ideas, the same kind of libraries that they had in other languages, and they get the speed, and it's like, oh, this is awesome, that's it, I'm done. I want to hopefully dissuade everyone in the audience from that point of view, and do so by talking about the libraries that I've been working on over the past two years. So this is the Venn diagram of where I see the sweet spot of Elixir. It's in between composability, distributed systems, and fault tolerance. The composability allows us to build out APIs that feel really good to actually write on a day-to-day basis.
The distributed nature of it allows us to build out really large systems that can deal with many inbound connections and, if we're talking about something like Phoenix, respond very quickly. Fault tolerance allows us to build out these highly robust systems that were otherwise either too difficult for some people to build, or very complex and costly for some companies to consider building quote-unquote the correct way. However, I don't want to make it seem like these are the only three things that Elixir gives us, and I'm going to touch on some other things in this talk: compile time, umbrellas, scalability, macros. I should have included them in a larger Venn diagram, but then I would end up with something that looks like a spotted horse, and that's probably not going to look too good for anybody. Okay, so a couple of concepts: querying APIs, composing and sending of mail, smart fixture data, testing payload responses, and parallel acceptance tests. These are the core problems and solutions that I've been working on and iterating towards. I don't want to make it seem like I've solved these problems, at least in what I consider to be the good Elixir way, but I think I'm moving in the correct direction, and if these libraries seem of interest to people I'm definitely going to be asking for some feedback and some help on building them out. The first of which is probably the lowest hanging fruit, and I'm willing to bet a lot of people have created this and stumbled upon this pattern within their own applications. It's a pretty simple pattern. So we may query our API with these key-value pairs in the query params, and then we end up building out something like this, right? We take advantage of the pattern matching system that exists, and recursion, and we can build out a nice query builder very, very quickly through Ecto and through Elixir.
So just to break down what's happening here: in the first function, the index function, we have our params, we pass them into build_query, the result of that we pass into Repo.all, and then we render out the response. In build_query, the params of course come in as a map, and so it goes down to the second function. It detects if it's a map, yes, okay, we enter into this function, and we convert the map to a list and we recurse into build_query. Now it goes down to the third function, and this is the single-argument version for when it's a list, and all it does is add the model on as the first argument and then recurse again. Now it goes down to the last two functions, and this is the traditional iterate-over-the-list pattern that we all know and love. All it's simply doing here is building out the query for each key-value pair: foo equals bar, baz equals qux, etc., etc. Then once the list is empty, it goes back up the chain, returns the composed query to Repo.all, and we get what we want. The nice thing about this is that we can have the default be a general catch-all for a key-value pair, but if we wanted to match and have a different query run, or rather build out a different rule for a given attribute, we can use the pattern matching to do that. So we could have defined this function right above our catch-all, and it will simply detect if the attribute is updated_at with a timestamp value. So we are building out a part of the query that determines whether or not this post's updated_at timestamp is greater than or equal to the date value that we're passing in. This is such a common pattern that I extracted it out to a library called Inquisitor. And it's pretty simple to bring in: inside your controller, or wherever else you want to use it, you just use Inquisitor and you pass in the model that you want to use for it.
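A minimal sketch of the recursive query-builder pattern just described, assuming a `Post` Ecto schema and a `Repo` module; the function and field names are illustrative, not Inquisitor's actual source:

```elixir
import Ecto.Query

def index(conn, params) do
  posts = params |> build_query() |> Repo.all()
  render(conn, "index.json", posts: posts)
end

# Map in: convert to a key/value list and recurse.
def build_query(params) when is_map(params),
  do: params |> Map.to_list() |> build_query()

# List in: add the model as the first argument and recurse.
def build_query(params) when is_list(params),
  do: build_query(Post, params)

# Special-case rule, defined above the catch-all: match updated_at
# and compare against the timestamp instead of testing equality.
def build_query(query, [{"updated_at", timestamp} | tail]) do
  query
  |> where([p], p.updated_at >= ^timestamp)
  |> build_query(tail)
end

# Catch-all: equality match on any remaining key/value pair.
def build_query(query, [{key, value} | tail]) do
  query
  |> where([p], field(p, ^String.to_existing_atom(key)) == ^value)
  |> build_query(tail)
end

# Empty list: return the composed query back up the chain.
def build_query(query, []), do: query
```
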
It is going to build out a function named after the model, or actually, I guess they're schemas now, I don't know what we're calling them, models, schemas, whatever they are in the future. It's going to build a named version of that function off of that. So in this case we have build_post_query, and all that other default stuff you get for free. It's just macros under the hood that are injecting this code into the controller. Now, there are some nice security features baked into it. You probably do not want to allow queries to run against just any field in your model slash schema; that makes everything public. So there's a whitelist option, and these are opt-in: anything that is not included in this list will be disallowed from being queried against. It's just ignored, so it's not like somebody who includes a key-value pair that is not allowed in the whitelist gets some sort of error; it's just tossed out. We can handle, and I hope people can see this because it's a little bit wide, I had to make it smaller, we can include virtual attributes. By that I don't mean the Ecto virtual attributes, just in the sense that we have keys that do not map directly back to a field on our model. In this case, we may be talking about trying to find posts that are within a given month in a given year, right? So we may have a timestamp field called published_at, and this contains all that information, but we want to query against just two segments of it. So here we have a URL where we're requesting month and year, and it will come in and actually run this fragment function, which is part of Ecto and allows us to run functions within Postgres or whatever database you're using. In this case, the date_part that you see there is a Postgres function.
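Putting the whitelist and the "virtual attribute" handler together might look roughly like this. The `whitelist` option and the `build_post_query` name follow the talk, but treat this as a sketch and check Inquisitor's README for the exact current API:

```elixir
defmodule MyApp.PostController do
  use MyApp.Web, :controller
  # Only these keys may be queried; anything else is silently tossed out.
  use Inquisitor, model: Post, whitelist: ["title", "month", "year"]

  def index(conn, params) do
    posts = params |> build_post_query() |> Repo.all()
    render(conn, "index.json", posts: posts)
  end

  # "month" doesn't map to a field; instead, query a segment of
  # published_at using Ecto's fragment to call Postgres's date_part.
  def build_post_query(query, [{"month", month} | tail]) do
    query
    |> where(
      [p],
      fragment("date_part('month', ?)", p.published_at) ==
        ^String.to_integer(month)
    )
    |> build_post_query(tail)
  end
end
```
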
So this will simply get the part of the date that we're requesting, whether it's month or year, determine the numeric value of it, and it will match against that and build up the query like that. Some other things that we get in Inquisitor: Booleans are typecast. All your params come in with string values, so it will determine whether or not that string value is the string 'true' or the string 'false', and if it is, it will typecast it to a Boolean automatically. And then we get a limit handler out of the box, so you can do something like this. Now, the really nice thing about writing this in Elixir is that the entire implementation of this library is this right here. I stripped out the comments and the documentation, but it only ended up being 56 lines of code, which is phenomenal. If I had to build this in another language, it would most definitely be significantly longer. And I didn't really code golf this too much, so I'm willing to bet that someone a lot smarter than me can come through and really shrink this down more. I'm not really too interested in doing that myself, because it meets my needs. But that, to me, shows some of the power and the allure of Elixir. All right, so the next one. I have to confess that this is not a direct quote. The reason for that is because, I don't know when it was, maybe six months ago, José committed, I guess, Twitter suicide, whatever we want to call it. He is still there, but he just deleted his history and is now retweeting, I think, just the Elixir account, or maybe Plataformatec as well. In any event, I can't go back and find the original tweet, but it was something like this, right? He was looking for a library to compose email messages in the same way that you do with Plug or any other type of pipe-based library. I was working on an email system at the time, and I kind of had something similar, so I decided to extract that out.
And I decided to take it a step further. If you're unfamiliar with RFC 2822, I am way too familiar with it now. It is the RFC for internet message formatting. It's a pretty big specification, and it details what the spec is for internet email: single part, multipart, attachments, encodings, everything that you would want to know about email is in this spec. So I wrote a library called Mail. Now there's a kind of interesting, or funny, story to go along with this. I created a repo on GitHub called elixir-mail, and I just put in the README that the goal of the project was to be RFC 2822 compliant. Somebody must have been following me on GitHub, and right away they posted it to Hacker News. So within two days, with nothing in the repo, there was probably just a few lines of code, nothing was really done yet, this hit number one on Hacker News without anything to really show for it. If we were concerned about the Elixir hype cycle, I think that this may be a good indicator of it, and we're definitely heading up. I don't think we're at the top yet; I think we still have a little bit of a ways to go. Maybe I can move that graphic down a little bit. But a completely empty repo hitting number one on Hacker News seems a bit funny to me. Anyway, okay, so some of the goals of Mail. Actually, here are the three goals: message composing, rendering, and parsing only. That's all that Mail does. It does not handle sending and receiving of messages; I'm leaving that up to other libraries. I have a library I'll be talking about in a few minutes that uses Mail, but Mail can be used by anybody, and that's really what I'm hoping for: that it becomes this primitive that other people build messaging platforms off of. And here's the API. So if we wanted to compose a single part message, we can build a new mail message.
Then we put text into it, we put the to header field, with the from, with the subject, and it builds out this map, essentially, that can be rendered out into an RFC 2822 compliant message. If you want to do a multipart message, we can do this right here: we can add text, we can add HTML. Now we can put attachments, any number of attachments, which can either be a path, a relative path to the file, or we can add a tuple with the name of the file and then the data of the file. And Mail will be smart enough to try to determine the MIME type of the attachments. Once we have the message, we can render it out with one of the included renderers. Right now, there is only an RFC 2822 renderer. I decided to namespace it in the unlikely event there is another specification out there I don't know about; I didn't want to box myself in. And something that I think a lot of people have been using it for quite a bit is the message parsing. So it will take an RFC 2822 compliant email message and it will parse it back into an Elixir map, whether this is single part or multipart. It has all the encodings in there: base64, quoted-printable, 7bit. There are probably a lot of edge cases, because this is an old specification, and even though the library itself has been around for a year, I haven't been working on it for a year; I work on it on and off as I need to, and I receive a few PRs every now and then. So I'll have to say that I need some help with the library. I'd really like it if this became kind of like the core library that the community used, and there's been a lot of love poured into it to make it as robust as possible. I think the number one thing that a library like this really needs is an extensive test suite, something that really allows us to determine where all those edge cases are, and of course that means vetting out the full RFC 2822. Okay, so that's Mail.
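The multipart flow just described might look like this. These helpers follow Mail's documented API as I understand it, but treat the exact names and arguments as a sketch:

```elixir
# Compose a multipart message with Mail (composing, rendering, and
# parsing only; sending is left to other libraries such as Courier).
message =
  Mail.build_multipart()
  |> Mail.put_to("friend@example.com")
  |> Mail.put_from("me@example.com")
  |> Mail.put_subject("Hello")
  |> Mail.put_text("Hello from Elixir!")
  |> Mail.put_html("<p>Hello from <strong>Elixir</strong>!</p>")
  # Attachments: a path on disk, or a {filename, binary} tuple;
  # Mail tries to infer the MIME type from the file name.
  |> Mail.put_attachment("invoice.pdf")
  |> Mail.put_attachment({"report.csv", "a,b\n1,2\n"})

# Render to an RFC 2822 compliant string, namespaced by renderer...
rendered = Mail.render(message, Mail.Renderers.RFC2822)

# ...and parse a raw message back into an Elixir data structure.
parsed = Mail.parse(rendered, Mail.Parsers.RFC2822)
```
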
One thing that I've seen with some of the mail sending libraries that are out there, and this goes back to my original premise about how we're taking existing things we've done in other languages and applying them to Elixir, is that we see adapter-based libraries. And I think that's fantastic, because that's actually where mine landed. But I feel like a lot of these libraries stop short, in the sense that with something like Elixir we have to really consider more. We don't just consider a few people using our application; we have to consider, okay, how does this application operate, how does this library operate, under millions of users? How does it operate in a distributed system? How do we make sure that the messages going out have guaranteed delivery? So I started with this kind of base idea in mind: if we think not about how we're going to send 100 messages or 1,000 messages, how are we going to send a billion emails through this system? Something that really competes, and guarantees a high percentage of those emails being received, or, in the somewhat likely event that a message is not received, handles it properly? So there's a little animation here to describe what I want to talk about. Here is this little guy, he is sitting at his computer and he has an email he's going to send. So he sends that email. However, there are a whole bunch of other emails waiting to go out. So how do we ensure that, with the message server we're connected to, we're not going over some sort of rate limit, right? How do we ensure that the message server that we're connected to isn't having some sort of downtime? And how do we make sure that we don't lose that message that's going out? Or how do we make sure that the message going out isn't malformed in some way, causing some sort of error, and that we're notified or able to handle that properly?
So with Courier, we hopefully are going to find ourselves in a place where we can handle all those edge cases and make sure that on the other side the recipient receives their letter, their email rather, or we can find out what went wrong and handle that properly, or put it in some sort of queue to be examined later. So, Courier. It's still a very early version. And here's the API. To send a message through Courier, the first one, two, three, four, five lines of that are Mail. The last line is Courier. So we just build out a message with Mail, and then we deliver it. To bring Courier into your application, you're going to define a mailer. This is going to use Courier, and then you want to bring in your OTP app, so it likely won't be called :my_app in your case. You're going to add your mailer to the supervision tree. The reason for this is because, under the hood, Courier actually has a pooler and a scheduler that watches for new messages that are scheduled to be sent out, and then it will grab a pool limit of those messages and iterate and try to send out every single one that it can. And finally, we configure the mailer. In this case, we're using the SMTP adapter and we set some of our configuration options for that there. Courier comes with some adapter types out of the box: SMTP, logger, test, and a web adapter, which is actually a separate library, courier_web. That one has a web interface for interacting with your messages, so it's good for development. Or you can build your own, and building your own custom adapter is super simple; I tried to make it as simple as possible. It takes two functions: start_link, where in most cases you're probably going to just return :ignore, and then the deliver function. The deliver function takes the message and it takes the options. The options will be comprised of maybe the configuration options that you set up, or some scheduling options.
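The three wiring steps just described, sketched end to end. Module names, the child spec style, and the configuration keys here are illustrative; consult Courier's README for the real options:

```elixir
# 1. Define a mailer that uses Courier, pointing at your OTP app.
defmodule MyApp.Mailer do
  use Courier, otp_app: :my_app
end

# 2. Add the mailer to your supervision tree: under the hood it runs a
#    pooler plus a scheduler that polls for messages due to be sent.
children = [
  supervisor(MyApp.Mailer, [])
]

# 3. Configure the adapter, e.g. SMTP, in config/config.exs.
config :my_app, MyApp.Mailer,
  adapter: Courier.Adapters.SMTP,
  host: "smtp.example.com",
  port: 587,
  username: System.get_env("SMTP_USERNAME"),
  password: System.get_env("SMTP_PASSWORD")
```
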
And then you can define your custom mailer delivery logic right there. You can schedule deliveries for the future, if you include the at option and then a datetime. Currently the datetime is not working off of the Elixir Calendar stuff; it is the Erlang-style tuple, so year, month, day, then hour, minute, second, I think. I'd really like it if someone could PR support for Calendar; that'd be pretty nice. You can configure rate limiting and delivery frequencies, so if you don't want to attempt to send more than 15 messages at once, or if you want your scheduler to wake up every 100 milliseconds to check for new messages to be delivered. You can render from within Phoenix. So Courier.render will take any Phoenix view. This MyApp.MailerView has nothing to do with Courier; it's just the name of the module, a regular Phoenix view. It will actually determine the type of each part. In this case, on the first one, because it determines that the MIME type of the template is text, it will do a put_text into the message, and on the second one it will determine that the MIME type is HTML, and so it will do a put_html into the message. And then the last section there is the data that we're passing off to the view. Then we deliver it. Another nice thing with Elixir is that testing the message delivery ended up being really simple to implement. So here we just add the test adapter. I like to set the delivery timeout to something super high so I can go in and inspect if I need to, and the delivery interval is zero; we don't need anything higher than that for our test suite, and there's no reason to block on anything. And then the implementation test. This is pulled directly from dockyard.com, so this is our API backend, and this is when somebody contacts us. If you look at the second line within the test, we have an assert_receive, and we're looking for the delivered message.
And then the value being passed back is the message itself. We will time out after 500 milliseconds, so if the message is not received within 500 milliseconds, basically an error is raised in our test suite; otherwise we continue on and we get back the original message that was built out with Mail. So this ends up being a really nice kind of end-to-end testing solution for message delivery. And again, I need help. I would love community involvement in this one. Hardening of deliveries: I've gone through, and I think it's pretty hardened right now, but I'd like some other eyeballs on it, and perhaps even some discussion around whether or not my implementation is correct. One thing I'd really like to do is distribution across adapters. So let's imagine that you use AWS to send your email, right, and there's some sort of rate limit attached to your account. You could open up multiple accounts, and then you could have multiple adapters attached to each different account, and you could have a scheduler that does some sort of round-robin delivery across the different adapter types. This would be ideal, and at that point we'd have something that's super scalable. Sorry, I think I forgot to mention, but if the message fails, there's a handler for that. One thing I've been thinking about is, you know, how much farther away is this system from something that a MailChimp might give you? Now, there's a whole aspect of MailChimp having a UI, and there's whitelisting involved, but from a marketing perspective one of the big things that MailChimp does is that it handles a message delivery guarantee, and it will make sure that your mailing list is going out to the widest audience possible. And these types of services are expensive.
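The test setup and assert_receive flow described above, sketched out. The `{:delivered, message}` tuple shape and the option names are my assumption of Courier's test adapter contract, reconstructed from the talk:

```elixir
# config/test.exs: swap in the test adapter. A very high timeout lets
# you pause and inspect; a zero interval means nothing blocks in tests.
config :my_app, MyApp.Mailer,
  adapter: Courier.Adapters.Test,
  delivery_timeout: 100_000,
  delivery_interval: 0

# In the test: trigger the action, then assert the message was
# delivered within 500ms; the value passed back is the original
# Mail message that was composed, enabling end-to-end assertions.
test "contact form delivers an email", %{conn: conn} do
  post(conn, "/api/contact", %{"name" => "Jane", "message" => "Hi!"})
  assert_receive {:delivered, %Mail.Message{} = message}, 500
end
```
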
So I see, as we evolve our tools and new things become available like Courier, there may be opportunity to replace these services and reduce the cost across organizations with properly built out libraries. All right, now we're going to talk about data, but not this type of data. This type of data. Boring data. You know, we start our applications with these really simple data models, right? This one relates to this one, that one relates to this one. And when we are stubbing out and creating factories or fixtures for our data in our test suite, the simple modeling of our data fits really well with certain solutions. However, it's not too long until the maturity of the application brings you to something more like this. It looks like a mess, but this is in somebody's head somewhere. So, complex data relationships, and especially once we start getting into foreign key constraints, how to have certain data available before other data is created in the database ends up being a tricky problem to solve when it comes to fixture data. However, some of the ways that we can write Elixir libraries, I hope to demonstrate, bring some nice solutions to this problem. So something I've been working on, again for a while, is EctoFixtures, and the implementation I'm going to talk about right now is currently on a branch. The first thing you want to do is add an EctoFixtures compiler, because it's working off of files that are not Elixir files. The fixtures themselves are not Elixir files, so we have a custom compiler that will actually detect any file changes within the fixtures themselves, whether a file changes, is deleted, or a new one is added, and it will make sure everything recompiles properly. And then the fixture file: I kept the syntax somewhat Elixir-like, but it's clearly not Elixir, right?
So on the first line we're defining the model associated with this fixture, we're defining the repo associated with this fixture, and then each grouping there is a named grouping. So post_one is the name of the group, or the name of the record rather, and then we have the key-value pairs within there. Sorry, that should just say use fixtures at the top. Then the API for actually using the fixtures within your test file: we just call fixtures and then pass a list of the fixtures that we want to bring in. These are then grouped off of the compiled fixture data, inserted into the database, and then made available on the data key of the second argument to your test. Now, gathering the data, composing it, and inserting it into the database for just single records is actually a fairly trivial process. Where it gets more complex is when we start getting into all these different relationship types, because how do we guarantee ordering, especially since Ecto migrations, when we're talking about relationships between records, will set up a foreign key constraint? The odds are that, if you're trying to work off a child record, the database is not going to allow the insertion of that child record until its parent is available. So if we go back to this file that we added to bring in EctoFixtures, the nice thing here is that use allows us to do some compile-time, I don't want to use the word magic, things. I'll say things instead of magic. It'll do compile-time things, and the thing specifically that it does here is that at compile time it will read all the fixture files, build a giant map of all that fixture data, and then it will actually sort all the data based upon the relationships defined within the associated models. So it builds out a DAG. If you're unfamiliar with a DAG, it's a directed acyclic graph, and it will set up all of the edges between the data points properly.
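To make the shape of this concrete, a test using EctoFixtures might look like the following. This is my reconstruction from the description above, not the library's verbatim API, so every name here is hypothetical:

```elixir
defmodule MyApp.PostTest do
  use MyApp.ModelCase
  # `use` triggers the compile-time work: reading fixture files,
  # building the giant map, and sorting records via the DAG.
  # The slide says "use fixtures"; the module name is assumed.
  use EctoFixtures

  # Name the fixture records to pull out of the compiled map; any
  # associated parent/child records are gathered automatically.
  fixtures [:post_one, :post_two]

  # Inserted records arrive on the :data key of the test context.
  test "posts are inserted with parents first", %{data: data} do
    assert data.post_one.id
    assert data.post_two.id
  end
end
```
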
Then, at that point, we have a DAG and we have all the data. When we request, going back to this one right here, when we request post_one and post_two, it will attempt to get post_one and post_two out of that giant map. It will also determine what other records are associated with post_one and post_two and take those out as well, whether they're child or parent records. Then, when we attempt to insert the records into the database, it does a topological sort on all the records, so that we're making sure the parent-most records are always inserted prior to child records. And this is working in EctoFixtures for one-to-one, one-to-many, many-to-many, and through records. So, this is a beginner-level talk, so if you're new to programming, or new to Elixir and new to compiled languages, you may be thinking: what is compile time, what is this, what does it do for me? A quick primer: there are, I'd say, three life cycles within your application. You have compile time, which is, I mean, kind of fuzzily present in other languages, but in a language that requires compilation like Elixir, it's like the build time of your application, right? A good analogy might be in JavaScript land, where if you're running something through Babel and you are transpiling it to JavaScript, the transpilation process can be considered the compile time. So we have compile time, and anything that is done at compile time is done once. So if we are calculating a value at compile time, or we're building out some sort of giant map like I am with EctoFixtures, that is done once, and then that value is available to us throughout the rest of the life cycle of the application. However, things that are pushed down to compile time of course increase compile time, and usually it is more complex to do things at compile time.
We have boot time, and these are things that are happening while your application is instantiating. So when you type mix phoenix.server, right, that is boot time, and then the app becomes available. Generally Elixir is very fast at this because it pushes a lot of stuff into compile time, but you can still do calculations at boot time, and again, that is a one-time calculation. You incur the cost once during boot, so you're increasing boot time, but you're theoretically going to be able to reuse that cached value with relatively low to no cost in the future. The reason you may not want to push things to boot time is, let's say your server goes down and you need to bring it back up again, right? Things pushed to boot time at that point could increase the time it takes your server to instantiate. And finally we have run time, and run time is calculations that are being done on the fly. This is the slowest of all of them, but also generally the easiest to implement. Oh, the thing I want to say about this is that what's really nice in Elixir is that we have this really great interface, through the macro system and through some other functionality, to build out and push functionality onto compile time with relative ease, at least in my experience compared to other languages. Something I've been focused on over the past few months is thinking about how I can be pushing more and more things to compile time, and it's actually informing the way that I think in other languages as well, so that's been kind of a nice eye-opener for me.
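A tiny illustration of the compile-time point above: a module attribute is evaluated once, when the module compiles, and the result is baked into the compiled code rather than recomputed at boot or run time. Names here are illustrative.

```elixir
defmodule Lifecycle do
  # Compile time: evaluated once during compilation; the sorted list
  # is embedded in the compiled .beam file.
  @sorted_at_compile Enum.sort([512, 128, 1024, 256])

  def compile_time_value, do: @sorted_at_compile

  # Run time: the same work, but re-done on every single call.
  def run_time_value, do: Enum.sort([512, 128, 1024, 256])
end
```

Both functions return the same list; the difference is only in when the sorting cost is paid, which is exactly the trade-off described above.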
Some other features of EctoFixtures are serialization helpers for the fixture data. There's enum, so if you have, say, red, green, blue, and you have multiple records, it will just loop over these enumerated values: the first record will get red, the second record green, the third record blue, the fourth record red, etc. And then we have sequences, so if you have user-1@example.com, or rather user-N@example.com where N is the sequence number, it will just continue to count up that sequence for as many records as you're generating. And again, EctoFixtures is under development, so if this library and some of the concepts I talked about are of interest to you or your organization, please reach out to me. I'm on IRC and I'm on Slack, and I'd love to chat about them. Okay, so one of the last libraries, and I think I've got a little bit more time, is something that's saved us quite a bit of time. So this is a, well, it's a blob of JSON, but specifically it's JSON API. If you're unfamiliar with JSON API, the specification, for lack of a better term, is very verbose, and testing it can be very difficult and frustrating. The reason for that is because JSON API is meant for machines to talk to machines; it's not meant for people to talk to machines, or machines to talk to people. It's meant to provide this consistent, reliable data schema so that you don't have to go out and create custom handlers or adapters on your client or on your server for reading and understanding what one machine is telling another. Unfortunately, when we make it easier for machines to talk to each other, we sometimes make it more difficult for people to interact with these things. So the API that we ended up with for testing this is a library called json_api_assert, and it is, I think, very beautiful. I'll show you an example of how much code is cleaned up in a moment. It takes a very simple concept of
types and composability and distilled it down to in this simple api that gives us a ton of power and allows us to do something that would have taken hundreds of lines of code to do I'm not joking like this right here uh writing out all these assertions um you know hundreds plural maybe close to 100 lines of uh of code to write it out properly and that's actually very difficult to maintain it's not flexible at all um if we for whatever reason decide to change um the type of records that are coming back and the number of records it's just it's a big pain to go in and make these changes however with json api we can now just pass in the payload payloads being returned the original unadulterated payload is being returned out of every single function here and then we pass in the given record we expect to find within the payload so it will go in and determine whether or not a record of this type and id exists in the payload uh in this case the first line of the data section of the payload and if it does it will actually do a comparison of the attribute values and determine if okay okay this is uh this has uh name age and the values match on both ends that assertion is true and um a certain relationship will determine the relationship between different pieces of data in the payload so uh json api cert is um again available on our github projects and this is um a for the sake of fitting on the screen um the smallest example i could find so this would be a um this might be a way that you're writing uh kind of like the payload assertions you could try to traverse the object but what we found before we uh got json api cert was something that was easier was just to do we expect to payload and then assert that the payload of receiving is the same as the expected payload so this is uh i don't know how many lines but it's uh it's a few and now a json api cert this is all it is right so we are starting the data we start the relationship and that's done um again i need help so uh 
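To show the composable shape of these assertions without depending on the library itself, here is a toy re-implementation of the core idea (the names and details are my own hypothetical simplification, not the real json_api_assert API): each assertion verifies one fact about the payload and then returns the unmodified payload, so further assertions can be piped.

```elixir
defmodule MiniJsonApiAssert do
  # Toy sketch of the json_api_assert idea: check that a record of a given
  # type and id exists in the payload's "data" section, compare any expected
  # attributes, and return the ORIGINAL payload so assertions compose with |>.
  def assert_data(payload, %{"type" => type, "id" => id} = expected) do
    records = List.wrap(payload["data"])

    record =
      Enum.find(records, fn r -> r["type"] == type and r["id"] == id end) ||
        raise "no record of type #{type} with id #{id} in payload"

    # Compare only the attributes the caller cares about.
    for {key, value} <- Map.get(expected, "attributes", %{}) do
      ^value = get_in(record, ["attributes", key])
    end

    # Return the unadulterated payload for further piped assertions.
    payload
  end
end

payload = %{
  "data" => [
    %{"type" => "user", "id" => "1", "attributes" => %{"name" => "Jim", "age" => 42}}
  ]
}

payload
|> MiniJsonApiAssert.assert_data(%{"type" => "user", "id" => "1",
                                   "attributes" => %{"name" => "Jim"}})
|> MiniJsonApiAssert.assert_data(%{"type" => "user", "id" => "1"})
```

Because every assertion hands back the same payload, a whole test can read as a single pipeline, which is the composability being described here.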
You may not care about JSON API; it's a very Ember kind of thing. But I think the general concept is good: testing JSON payloads in general, regardless of whether they follow the JSON API spec, feels like a difficult process. It would be really nice if we had some sort of generic JSON primitive testing library that let us pass in a JSON fragment and traverse down to determine whether it exists in a larger payload, and then json_api_assert could build on top of that. If somebody doesn't tackle this, I'll probably end up doing it myself, just because I would really like to have it.

All right, so finally, my last section. Some of you may be thinking, "my tests are fast in Elixir, what are you talking about?" I would posit that you're not doing single-page application or client application development, or running your web apps through a browser environment for testing, because that is still slow. At DockYard we build Ember applications, so I'm going to talk about Ember very briefly, just to explain what this process is. This isn't something that I've written; it's something I want, something I feel would be a huge win, and something I feel may only be possible in Elixir. Maybe.

The way we end up testing our applications at DockYard, or at least the way we try to test them, is this: we have the back-end API and the front-end client in two separate repositories, and we try to treat them the same way you might treat an iPhone application. You don't embed the iPhone application's source code within your server's source code; we treat our client-side application as a first-class citizen. Within Ember there's a nice acceptance test suite that runs through QUnit and performs all the interactions: it will fill out form elements, click on things, wait for the page to render, assert whether certain content is on the page, and so on. For even a somewhat mature application, you can imagine there are going to be quite a good number of acceptance tests running through the happy paths.

On top of that, I add in the additional complexity that I do not want to be running against stubs and mocks. I have been bitten too many times by that in my career. I am very much of the opinion that if you are acceptance testing only with stubs and mocks, then you're only testing the stubs and mocks; you're not actually testing that it works in the real world. The best way to do this is for the tests to actually consume the server. The way we have it set up is that during the setup clause of our QUnit test, it does a blocking Ajax request to the server, requesting that some data be set up: make sure the data is clean, insert the fixtures, and then return a 200 status code. At that point the tests are free to run, and, as long as they're running in serial, we have these nice isolated test environments. As you can imagine, on top of the browser testing, doing it this way is just incredibly slow. Hence the skeleton picture here.

So enter ember-exam. ember-exam was written by a developer at LinkedIn, Trent Willis, and it takes advantage of functionality that exists in QUnit to run your tests in parallel, very easily. He introduced it over a year ago at a conference I was running in Boston, and I thought, wow, that's a really interesting concept; I wonder if there might be some way to use this at a higher level. Last year at ElixirConf I grabbed José and just started pelting him with questions, because Ecto 2.0 had come out and there was this whole new concept of concurrent transactional tests: we have the sandbox, and we can actually tell our tests to run in parallel even when they're hitting the database, which is awesome.

So one thing I'd like to see, and I think it may be possible because under the hood the sandbox is just using an agent (you have a name for the agent, and it keeps the transaction in there): within QUnit, each unit test can reflect upon itself to determine the name of the currently running test. What if we could send the name of the test over to the server? It could use that name as the unique identifier for the current sandbox, and then we could potentially have our acceptance tests running in parallel. Now, if this is possible, I don't know of other languages or frameworks that have this type of concurrent transactional testing available, which is why I think this may currently be something that's only easily accessible, and relatively easy to implement, in Elixir and Phoenix. I think this would be a huge win, and I'm sure that outside of Ember, and outside of client-side application development, there may be other ways to utilize something like this.

So I want to say that, as Elixir developers and as people transitioning to the language, we should not just rely upon the speed. The speed is nice, but there is a lot more to the language than just the speed itself. We should allow Elixir to educate us on how we can do more with less, and the way to do that is to learn our tools and allow ourselves to build for the future.

Thank you very much. My name is Brian Cardarella; I'm @bcardarella pretty much everywhere. I'm the CEO of DockYard, and we build Elixir and Phoenix applications for forward-thinking companies. So if you're interested in working with us, if you have a project, a company, an application you're looking to migrate over to Phoenix and Elixir, or if you're looking to build a new product to compete in a marketplace with existing
competitors, and build it on something that could definitely kick their butt, contact us at dockyard.com. And that's it.

[Host] Okay, since we're right about at break time, we're going to take time for just one short question. Is there somebody with a short question? No? So, unfortunately, you're between us and the break. No questions, so give them…
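As a footnote on the sandbox idea from the talk: the suggestion of keying an isolated sandbox by the name of the currently running test can be sketched with a plain Agent. This is a toy illustration of the concept only; it is my own hypothetical code, not Ecto's actual sandbox implementation and not the real Phoenix integration.

```elixir
defmodule SandboxRegistry do
  # Toy sketch: an Agent holding a map from test name to a private "sandbox",
  # so parallel clients can each get their own isolated state.
  def start_link do
    Agent.start_link(fn -> %{} end, name: __MODULE__)
  end

  # A client (e.g. a QUnit acceptance test) sends its test name over the
  # wire; the server uses that name to create its private sandbox.
  def checkout(test_name) do
    Agent.update(__MODULE__, &Map.put_new(&1, test_name, %{fixtures: []}))
  end

  # Fixture inserts go into the sandbox that matches the test's name.
  def insert(test_name, fixture) do
    Agent.update(__MODULE__, fn state ->
      update_in(state, [test_name, :fixtures], &[fixture | &1])
    end)
  end

  def fixtures(test_name) do
    Agent.get(__MODULE__, &get_in(&1, [test_name, :fixtures]))
  end
end

{:ok, _} = SandboxRegistry.start_link()
SandboxRegistry.checkout("user can log in")
SandboxRegistry.checkout("user can log out")
SandboxRegistry.insert("user can log in", %{email: "user-1@example.com"})
```

In the real proposal, the QUnit test's name would travel with each HTTP request, and the Phoenix server would use it to look up the matching Ecto sandbox connection, giving each parallel acceptance test its own database transaction.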