This is my talk: cloud applications, lesson two. This bit was removed from the programme, but it's quite important. I'm hoping to go a bit further than your sort of standard Elixir tutorial. So, for the people who've already arrived: hands up if you've already played with Elixir to some degree. Okay, so it's about 50-50. This is what we're going to try and cover: some topics around Docker, how you configure these services, service discovery, testing, and then what it means to be cloud native in 2017. Spoilers: we're going to make a chat app. It's a classic Elixir starting point. I've done this before, so if you just like code, go to this URL, which is my website; you can see the source code for this app, and there's also a video of me just going through the coding. In this talk we'll discuss a lot more of the why behind various choices. So, I'm Peter. It's nice to be here. I'm CrowdHailer all over the internet, and I currently work at paywithcurl.com, who let me take a few days off to come out here, so they get a shout-out. So, from the beginning — not this far back. This is how we start a new Elixir project. For the half of you who haven't seen this, this is going to be lesson one, so we'll quickly go through setting up a new project. Mix is the build tool for Elixir. We start a new www project, which is going to be the web interface for our application, and we want it to be supervised. If you were here for the talk Francesco just gave: supervision is a pattern from Erlang which is inherited by Elixir — all of the OTP goodness comes through. So, ooh, that's better. Okay, this is our first module: our chat module. pg2 is a module that comes bundled with the core Erlang distribution, and anything bundled into the Erlang core is available as a module from Elixir. I apologize for showing you some Elixir code so soon after Francesco has just sold you on a load of Erlang code.
If you have any questions, I'll take them as we go through if you want, because I think it's important that, if you don't understand, you can ask. So what we have here is a simple chat module. We create a room with a room name, so this is just a process group we're creating. To publish, we find all the members of the room and we send a message to every single one of them. And to join a room, you just join the process group. So anyone who wishes to publish to all the members of the group just has to pull out the list of all the processes, and we can transparently send a message to each of those processes. This is the Elixir equivalent of Erlang's send: we're going to send a message of this format to this process, and here we're using a for comprehension to loop through all the members, so every client in that room. So pg2 — this is just the module summary; the documentation is very un-jazzy, but you'll get used to that. One thing people ask about Elixir is: do you need to learn Erlang? No, but eventually you'll find it useful to investigate the stuff that's already there. Okay, to the web. Who's familiar with API Blueprint? That's a deafening silence. There's one in the corner. Swagger would be the other one. Okay, so I've picked the wrong horse here. It's called API Blueprint, and what this is, is a specification for our web application: we have a homepage action, which is a GET to the index; we have a publish-message action, which is a POST to the index; and we have a stream of updates, which is a GET on the updates path, where we're going to send down a stream of messages. To use this blueprint in Elixir, there's this Raxx layer we've got: we want a Raxx server, and we want to use that blueprint to route to it. Those of you familiar with Elixir may have come across Plug. Plug is a middleware layer.
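Since the slides aren't reproduced here, a minimal sketch of a chat module along these lines might look like the following. The `{:chat, room, message}` tuple shape is my assumption, not necessarily what was on the slide, and `:pg2` assumes OTP 23 or earlier (it was replaced by `:pg` in OTP 24):

```elixir
defmodule Chat do
  # A chat room is just a named :pg2 process group.
  # Group members can live on any node in the cluster.

  # Creating a room creates the process group.
  def create(room), do: :pg2.create({:chat_room, room})

  # Joining a room adds the calling process to the group.
  def join(room), do: :pg2.join({:chat_room, room}, self())

  # Publishing pulls the member list and sends a message to every
  # client in the room, transparently, wherever each process lives.
  def publish(room, message) do
    for client <- :pg2.get_members({:chat_room, room}) do
      send(client, {:chat, room, message})
    end

    :ok
  end
end
```

With this shape, any process that has joined a room receives published messages in its mailbox and can handle them with `receive` or `handle_info`.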
It fits the role of Rack in Ruby, or Ring in Clojure — pick your language, they've all got one. Raxx is an alternative to Plug. I've had problems with Plug in a couple of cases, but these were related to some odd edge cases I was hitting with streaming, because I wanted direct access to the socket and the ability to read the body twice. If you're looking to just get started, I would say go with Plug: it's widely adopted, it's much more used. For this talk it's not going to be important; it's just going to be an Elixir application we're going to deploy, and we're going to look at Docker, so we can move on. Ace is a server you'll need to run Raxx applications, and that fills out our ecosystem here. One nice thing about Ace is that it was built with HTTP/2 in mind first, so if that's something you're interested in, the Ace server is a good one to look at. Here's our application. This says it's a Raxx application, and we're going to start it with the Ace server. start_link is a pattern that comes up a lot in Elixir and Erlang modules: you call it to create a process that's going to run your code, linked to the original calling process. This is also how you build up supervision trees. In Raxx, this is what a controller looks like. We say, again, that it's a server, and we have a single handle_request function. This is, again, a pattern from the Rack family of frameworks: you get a request and some initial state, and you create a response. These are pipe arrows — I think they were taken from F#, possibly. They're something Elixir adds on top of Erlang, one of the niceties it gives you: the pipe takes the result of the part before it and feeds it in as the first argument. Here we have a function which will set a header on a response — we're adding the content-type header — and its first argument is the original response. Then we set the body.
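A quick illustration of the pipe operator, since it carries the whole controller style — this is generic Elixir, not the code from the slide:

```elixir
# `x |> f(a)` is rewritten at compile time to `f(x, a)`:
# the result of each step becomes the first argument of the next.
result =
  "  Hello, Curry On!  "
  |> String.trim()
  |> String.downcase()
  |> String.split(", ")

# Without pipes the same expression reads inside-out:
# String.split(String.downcase(String.trim("  Hello, Curry On!  ")), ", ")
```

This is why response-building functions take the response as their first argument: each step can then be chained onto the last.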
That will also take a response as its first argument, so we pipe it through. It's a really nice way to compose parts of your application. Here's another example of a controller: here's how we publish a message. The request we're given carries a bit more information: it has a body, which we can just pattern match on to extract — we don't need to fetch it in any way — and from the configuration we have a room name, which we can then pass to our publish function. Again, we're using this piping pattern to go through a series of steps and set up our response. The final one I might skip over — we can go through it, but this is the sort of thing to check the documentation for. This is how we do a streaming response. We set up a response from our original handle_request: we get a request to this endpoint and we return a response. At this point we just send the response that we have, and we set a new state for our process. Here we've set the body to true, which says there will be a body in this response, but we don't yet know what it is. We then have a handle_info callback, which runs whenever this long-running process gets a message from another part of the system. Here we pattern match on the type of that message — we can see it's come from our chat module with a certain message — and we build it into an event, a server-sent event. We then say we want to send this part of the message, so we add it to the body of our response, and we keep streaming and set up the new state. That's our full chat application. This is the final part of the puzzle: the application module. This is where we declare that supervision tree, which gives us the fault tolerance and a lot of the reliability that Elixir gives us. We have a list of children that we want to supervise, and here we say that this child is also a supervisor. It's a slightly cumbersome API, but in the latest version of Elixir it's been reworked to be more succinct.
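The application module being described follows the standard OTP shape; here is a generic sketch. The Agent child stands in for the Ace/Raxx endpoint, which I'm not reproducing, so the child spec is an illustrative assumption:

```elixir
defmodule WWW.Application do
  use Application

  # Called by the runtime because mix.exs declares
  # `mod: {WWW.Application, []}` (the --sup flag to `mix new` sets this up).
  def start(_type, _args) do
    children = [
      # Stand-in child; in the talk this entry starts the Ace server
      # running the Raxx application, with its port and cleartext options.
      %{id: :www_state, start: {Agent, :start_link, [fn -> %{} end]}}
    ]

    # :one_for_one — if a child dies, restart just that child,
    # without touching its siblings.
    Supervisor.start_link(children, strategy: :one_for_one, name: WWW.Supervisor)
  end
end
```

The strategy is the declarative part: the supervisor, not your code, decides what happens on failure.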
We say we want to start our WWW module with some configuration and some options — those options are just a port, and to say clear text. One caveat: if you use clear text, you will not get HTTP/2 support by default, because most browsers will only use HTTP/2 over HTTPS. We've started that server, and then we have this strategy. These are a few declarative settings. One-for-one says: if this process dies, for whatever reason — it really doesn't matter what — we're just going to restart a new one. All existing connections will be lost, which is a shame, but that shouldn't happen very often; for machine failures we just start a new one and go back to where we were, and the service is still up after a temporary blip. That would be lesson one: building a chat application. This will run, it will take many concurrent connections, and it will stream between all of them. The reason I call this lesson one is that this is all on one node. When you start this locally, you get one node. That's not really what Francesco was promising us, but it is what a lot of the introductory blog articles will give you. So we move on to lesson two. In my abstract for this talk I was going to ask: is Docker even useful? Maybe we should use it, maybe we shouldn't. We're going to jump in and use it, and we'll validate that decision later. For now: Docker, we're going with Docker. Who's got Docker installed on their machine? Hands up. Excellent. Who uses Docker in production? OK, so a smaller number — about a third, and maybe two thirds have it on their machine. You're exactly the audience I want to be talking to. This is good. Docker Compose, for those of you familiar with Docker, is just a way of running multiple Docker containers at the same time. So here's a simple Dockerfile. This bit you apparently all know, but we'll go through it. We start from an Elixir base image, which has Elixir installed.
These lines install Hex and rebar, which are part of the build tool ecosystem — they're worth putting in. We then copy our code in and we run this start script. Our start script just runs Elixir with a certain node name and a certain secret cookie, and runs a mix task. We'll go through in detail what the node name and the cookie give us, but this is a good starting point. This is the Docker Compose file which will allow us to run that. Here we're going to run one www service; it lives in the www directory, we use its Dockerfile, we set an Erlang cookie — to a better secret than that — and we want to expose our 8080 port. OK. So in development we now have Docker. This is how we start all the services: docker-compose up will start every service in that file, and docker-compose down will stop every service in that file. And this is how you run things in the Docker Compose environment — Mix is the build tool for Elixir, and once you get used to this, you essentially run everything inside some Docker Compose process or another. Or maybe not process — container is probably the better word. To build with Docker, we tag an image. Every time you run docker-compose up you'll get an image made from the most recent version of your code. I'm CrowdHailer all over the internet, including on Docker Hub, so this says I'm going to tag an image under my name, in my repository, and I'm going to call it www. We then push it up, so it's now available to use, and we can deploy it. This is where the options start getting a bit bigger: there are quite a lot of deployment solutions for Docker. I'm going to use Docker Cloud, not because it's the best, but because it's the simplest within the native Docker ecosystem, and I don't want to overload you with things to look into. You can have a great afternoon reading Rancher vs Kubernetes vs etc. later on if you want. So this is our startup wizard to run our application.
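For reference, the Dockerfile and Compose file being described are roughly this shape — the image tag, paths, and script name are my assumptions, not the exact slide contents:

```dockerfile
# Start from an Elixir base image (version tag is an assumption).
FROM elixir:1.5

# Hex (package manager) and rebar (Erlang build tool) —
# worth installing up front in any Elixir image.
RUN mix local.hex --force && mix local.rebar --force

# Copy our code in and fetch dependencies.
COPY . /app
WORKDIR /app
RUN mix deps.get

# The start script runs Elixir with a node name and a secret
# cookie, then runs the mix task.
CMD ["sh", "start.sh"]
```

```yaml
# docker-compose.yml — one www service built from its own Dockerfile.
version: "2"
services:
  www:
    build: ./www
    environment:
      ERLANG_COOKIE: "a-better-secret-than-that"
    ports:
      - "8080:8080"
```

With these two files in place, `docker-compose up` builds the image from local source and starts the service.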
We obviously need some machines, so here we're starting a node cluster. I'm going to call it chat; we're going to have the nodes in Bangalore; I've connected to DigitalOcean; and we want three of them — three is a good number to get started. This is the configuration for our service. I'm saying that I want to use the latest version of that image, I'm going to give it the service name www, and I'm going to add it to the stack chat. These are concepts that are migrating into Docker proper, but currently exist only in Docker Swarm and Docker Cloud. Some parts of the Docker ecosystem are much more stable than others: the creation of images, the tagging of images, and the repositories — most of what I'm telling you there is likely to still be true next year. This part is far more in flux. I'm very hopeful that it'll settle down; again, it's the discussion between Kubernetes, Rancher, and so on. So here, a stack is a collection of services running together — that would be your whole application — and a service is a set of containers all running the same image. They're essentially just replicas of a given container, and it's for scalability, resilience, et cetera, that you'd run multiple instances. We're going to deploy here with an unusual deployment strategy: we're going to have one container running this image on every single node. That's because we don't have anything else at the moment, so there's no point having fewer containers than nodes — we'd just have unused nodes. And because we're going to cluster them all together, and Erlang gives us that transparent message passing, there's really no point having more than one container per node either: it doesn't give you anything, it's just an unnecessary division in your setup. We're going to say auto-redeploy, so if I push a new version of my image, it's just going to roll it out. We then need to expose the ports.
So 8080 was the one I said we'd listen on internally; 8443 would be a secure service; and this 4001 is for some monitoring tools, which we'll see later. And then here's our secret cookie, which is set. This is what it looks like on Docker Cloud once it's running. Then there's this stack file thing, which is again part of this menagerie of terms. A stack file looks like a compose file: you can run a compose file, but you can't run a stack file as a compose file. I really hope that, if you've not looked at this already, before long this difference will go away. One of the main differences is here: we've set a particular image, we've not set a build path. So this will use the image that we created in a separate step, whereas Docker Compose allows us to point at source code and just run — which is obviously much quicker for local development: you change the source code and then rerun. This slide is to remind me to be calm and slow down; I have no idea how far through we are. Okay. Any questions so far? Okay, keep them for the end. So: why Docker? It's a good question — I asked it, so I'll probably answer it now. It's hot right now. Docker is a trend; we should all jump on that. This is the Google trend: the red line is Docker and the blue one is virtualization, because that was what we did before Docker. Last year this graph looked interesting, because you could see virtualization going down. This year Docker is so far ahead that you can barely even notice the lump that was virtualization in the past. All joking aside, it is good to fit in: Docker is meant to give us a reusable base to work from, and that aim in itself is a good reason to use it.
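For comparison, a Docker Cloud stack file for the same service might look roughly like this. Note that it references a pushed image rather than a build path; the deployment keys follow Docker Cloud's stack file format as I understand it, so treat the details as an assumption:

```yaml
# Docker Cloud stack file — services are top-level keys,
# and the image must already be pushed to a registry.
www:
  image: crowdhailer/www:latest
  deployment_strategy: every_node   # one container on every node
  autoredeploy: true                # roll out when a new image is pushed
  environment:
    - ERLANG_COOKIE=a-better-secret-than-that
  ports:
    - "8080:8080"
    - "8443:8443"
    - "4001:4001"   # monitoring tools, shown later
```

The `image:` vs `build:` distinction is the practical difference the talk points at: stack files deploy prebuilt artifacts, compose files can build from source.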
It allows you to experiment with Elixir without disturbing your existing build process, which is why I say you're the audience I want to talk to: at the end I'll show you a setup where you can get started using Elixir without even having to install Elixir on your machine. We just use the Dockerfile; if you've got Docker and Docker Compose, there's no other setup needed. Again, you get to do quick experiments: if you have an Elixir project and you think, maybe I should try a Neo4j datastore, or anything else, the whole thing is just a nice way to package things up. If you want to go that way — the microservices future where we all run hundreds of different languages to keep our developers happy — you can do that. More Docker goodness: you get to reproduce environments locally. You can start a fleet of containers — this is what Docker Compose is really good for, having multiple things going at the same time. Before, if you wanted to test a multi-service setup, that was quite a challenge. This one I think is quite an interesting point: you can model Docker-free production environments. Even if you're not sold on Docker, you can use it locally with a basic, say, Ubuntu image: you start up three Docker containers, you deploy apps to them, and there's nothing stopping you then deploying those same apps bare metal — or, well, not bare metal — to your Ubuntu VMs in the cloud. I had a Docker setup which modeled a Heroku setup for a while — it was just a nice way to bring in a database — and I had a Docker setup where I was experimenting with clusters, because that's a challenge: in this ecosystem you want to run clusters, but they're hard to set up, so I spun up four or five Docker containers and was able to just play around as if it was real clustering.
Then I pushed to DigitalOcean machines, or Heroku. And again, there's this diverse selection of pre-built containers, so adding a database is: find a database image, set some values, go, and you're done. No version managers — this is a point I really like. There's no managing whether I've got Elixir 1.2 or Elixir 1.6, because that's managed in Docker; it's just a different image, part of the configuration you've already got. I've taken this so far that I don't actually have Elixir installed on my machine; I only use it within containers. That's not entirely true — I've actually got the master version of Elixir installed so I can use the new formatter tool, but on old Elixir codebases. That's a very nice sort of freedom. Downsides: obviously there are some downsides with Docker. One of the main ones is the time it takes to get to know it, if you don't know it already. The best way to mitigate that is to not get carried away — just keep it simple. The Dockerfile I showed you: anyone who knows Docker could probably point to several inefficiencies in it. That's fine; it got me started. If I need that efficiency later, I can add it after I've proven the system. One of the tempting things here is stripped-down operating systems. There's Alpine, and there are pre-built Alpine Elixir images, and one day I thought: I would love to save those few kilobytes, I'll use Alpine Elixir. Everything was going well until I wanted to debug. I shelled in, and nothing that I knew was there on the machine, so I had to look up all of the Alpine documentation. Nothing in my life was made better by saving those kilobytes. Again, once it goes to production, with people who know what they're doing, you can bring Alpine into the mix. And the final point on that: use the official Elixir image — the official one is not built on Alpine. Just don't get yourself lost in the Docker maze; it's really easy, there's a lot of
people talking about it — just keep it simple. Secondly, don't get carried away by microservices. If you can't build a monolith, what makes you think microservices are the answer? There's an article about the microservice ball of mud: if you can't design a good system with modules for separation, you're not really going to improve your situation with microservices. Microservices might be your answer, but microservices doesn't mean Docker, and Docker doesn't mean microservices — those are two completely separate things, and you should remember that. Keep your freedom, and again, don't drink all the Kool-Aid in one go. A Dockerfile with a single image running on a single machine still gets you started with Docker. This final point about Docker — I think it's the final one — is neither a plus nor a negative. Immutable infrastructure is another trend in 2017, which is to encode all of the steps to set up your infrastructure. So this is the Docker Compose file: when you run docker-compose up, it sets up your system as near to production as possible. The reason to do this is that if you have a problem, or someone else needs to pick up your work, you give them the Docker Compose file, they run docker-compose up, and they're back to where you were. This means no relups. Relup is short for release upgrade — the live updates Francesco introduced in the talk before, where you can install new code on a running system and then switch over the processes. He alluded to the fact that it's not easy; in fact I would go so far as to say it's really hard — don't do it. It is a solution that works, and you can also have blue-green deploys, where you have two versions running at the same time; there are other solutions, and if you have the time to look into them you're more than welcome. Docker would be a great place to practice, but you would mess up a lot of the advantages you get with immutable infrastructure, and I think that should be considered quite a big negative, actually. So that was Docker. Do I need it?
No. It could be useful. Keep it simple, just get it working, move on — and it will change a lot about how you work, mostly in a good way, but it does change quite a lot of things. So, service discovery. Any questions so far? Oh, yes — okay, how do you do TDD? I mean, nothing about how it's done changes. Instead of running mix test, you run docker-compose run www mix test. One of the things you can do with Docker is mount local volumes into a container, so you can make a code change and run a single command. That command is quite long, so I would alias it on my local machine, but that's an issue you can solve. Again, if you make a code change and you've mounted the volume and you run docker-compose run mix test, you'll get the answer immediately; docker-compose run shows you the output of the shell, so you'll see your test results and any debug information. It's exactly the same as running locally, really — there's no reason for your process to change. Anything else? Okay, so, service discovery. Here we go. One machine cannot have fault tolerance. This is one of the things the developers of Erlang got to quite quickly: if you need a fault-tolerant system, you need two machines, and it must be concurrent. Those are non-negotiable pieces of information, and a lot of the architecture of Erlang comes from that fact alone. If you have more than one machine, you need to know how to find it — you need to know where it is on the network. Erlang comes with this: distributed Erlang. This is built in, and of course it's built into Elixir — whenever I talk about Erlang here, you can just translate Erlang to Elixir and you're fine. So, to do distributed Erlang, here we have this command again: we start our mix project with a name and a cookie. Within that shell, you can connect to another known node. So if our hostname was something else, and
there was a machine at www1 in DNS, running an Erlang VM called app, you would connect just like this. Except I've not solved the problem I came here to talk about, because: how do you know what that address is? That's the question. Once you've connected, you can get a list of all the machines — nodes — you're connected to. The nice thing about distributed Erlang is that if A connects to B, and C connects to B, then A and C are automatically connected: it's a fully meshed network. You don't have to find every node in the system; you just have to find one. That simplifies a lot of the search process. This is the standard way to configure the list of nodes you're going to look for with Erlang. You set a list — I copied this straight from the documentation; I don't know why their example host is called cave, I could not find the answer to that — of cp1, cp2, cp3 at cave; we're going to try and connect to all of them, and we say sync_nodes_mandatory, so we're not going to let the system start until it's found cp2 and cp3 at cave. Picture of a cloud, because: in a cloud environment, things change much quicker. Distributed Erlang is really good — it gives you the transparent message passing; they were way ahead of their time with distributed systems. Probably the area where you'll notice friction the most is that the environments they were working in were not the cloud environments we have now — the machines were not as transient as ours are. This is the biggest thing which needs smoothing out, and there's work on it; every month it's getting better, but it's something to be aware of. So, service discovery in the cloud: things are always changing; you don't know the service locations ahead of time, so you can't just provide a list up front. And even if you can provide a list up front, you don't know the order they're going to start in, or how long they're going to be there. The whole point is
that you can lose one machine without taking down the system — but that list of required nodes is exactly the opposite: it requires all of them to be there. And this is true with or without Docker: if you decide to start a whole bunch of virtual machines and put your Elixir project on them, you would still need to find the other nodes. There's also a slight distinction between node discovery and service discovery. On Docker Cloud we had the concept of a service, and the service was several containers all running the same code. If you want to cluster those together, I would call that node discovery: finding nodes running the same code as you and forming a cluster. Service discovery is finding some other service that you want to call out to, and there you wouldn't necessarily want to find all the nodes — anyone that can deliver that service to you is good enough. Node discovery is what we're trying to solve here. Docker Cloud gives you a whole load of nice conventions. Our service was called www; in the network that Docker Cloud gives us, if we call that URL we will find one of those containers, and it also keeps a DNS entry for every single container running. So in this case we can just keep trying to connect to www-1 and www-2, and that will keep our node cluster alive. We're not going to do anything if they go down — that's one of the realities of the cloud environment; we're not going to try and rescue one that goes away. We're just going to trust that when new machines come up, they'll connect to one of the first two; and if we have 15 machines come up and they all connect to the first one, then they're all in a fully connected cluster. Docker Cloud gives you these discoverable hostnames within a container; Docker Compose does not. So this is a very environment-specific problem. There's Google Container Engine, which is Kubernetes underneath; then there's EC2, or ECS, which is the AWS
container service. They're all different environments, and they all have different solutions at the moment. So I would point you to the library libcluster. Again, this is one of those brilliant tools; it's not part of the standard distribution, you will have to include it, but — and this is just copied verbatim from its readme — it supports the standard distributed Erlang facilities, UDP gossip, Compose if you set it up the right way, Kubernetes, and also Rancher, which is yet another orchestration solution. So you can form a cluster in any one of those environments. Configuration. Rules of configuration: there is only one environment that matters — production. So don't configure; put as much as possible in the code. I showed you we listened on port 8080. It's very easy — it sort of comes naturally — to write code that reads from an environment variable what this port is and listens on that port. But we control the complete container: we can say that this container will always listen on 8080 and hard-code that in. We don't need to configure it. It's then just a problem for the compose file, how we stick them together — you can route to a specific port. It's amazing how far you can get without configuration; so much so that when I did the walkthrough of this application, I was worried because I'd managed to get rid of all of the configuration and wasn't able to show how to set configuration up. The next point: always follow production. If in production you're reading from an environment variable, set up that environment variable in your Docker Compose environment; really try to minimize the difference between them. With Docker Compose you get quite far with this: you have a database at a URL you can choose, and you can then set up all of those things. And this is my own personal advice: avoid relying on named environments, because there are always going to be more of them. Dev, prod, test — but then, well, staging: that should be like prod, but obviously we don't want it to move real money, so it's going to have, say, a
different Stripe endpoint. And CI — well, that should be like test, but we don't want to run the slow tests, or we do want to run the slow tests. So environments look like each other, but there will always end up being some difference. Just have a list of environment variables, set them for each environment, and don't bother trying to name which environment it is. Surface testing. This is basically integration testing; I call it surface testing because it really emphasizes that you should test an interface that someone cares about. I think I reordered these slides — so, surface testing against this codebase: in this test suite I'm using HTTPoison, a client library, to call my www service and test that it does what it should do. I run this in a separate mix project, so it's a completely separate Elixir application, and what that means is I could actually replace my Elixir application with, say, a Go application and not change the test suite. These tests are not that much slower: you can run things massively in parallel, and over the loopback interface the system is quick. But there are no abstractions at this level. If you're writing an API, the people who are using it care about what it does when you make certain requests — so make that request and check that response. This approach does become a bit gnarly if you have huge amounts of HTML coming back, obviously, and it's not a replacement for unit tests: if you want to TDD and the unit tests are helpful, then they're still helpful to do. The nice thing about this shows up when you go through more advanced build steps. In Erlang you have the ability to make trimmed-down releases, and they won't have the test code in them — pardon me — so you can't run the unit tests against that release to check it works properly. But, after the appropriate amount of deliberate pause: you can, of course, run surface tests against that release. And you can actually run your surface tests — these integration tests — against your running system. We have a cron job at work
where we run this test suite against our staging environment every 15 minutes. It does a rudimentary sort of pressure testing, and it also allows us to check all of our metric information: we can see that traffic — it's a very odd traffic pattern, you get a spike every 15 minutes — and we can check that tracing is working. We can check a variety of interesting things, because we've got a system always running. So, cloud native. Cloud native is a term I like, distinctly because it's not "microservice". It's quite likely that you'll be developing in the cloud in this day and age; it's also quite likely that you don't need to have 100 microservices. Cloud native just makes a few changes. The first one: the web is really the only interface that matters. True, you can shell in to your running containers, but the more you have, the harder it'll be to find the one you need. If a service is messing up, there could be five instances of that service running, and you'd have to shell in to every single one to check or debug — and you might just be debugging the wrong instance. So getting as much information out, so you can aggregate it, is really helpful, and that's going to be — perhaps I should say — an internet interface: it doesn't have to be a web one, but it will be over TCP, over the network that's set up in the Docker environment. And documentation is crucial. Documentation was always crucial, that's not a surprise, but once you have more than one service, the contracts between them are the hardest thing to test — testing one service is a lot easier — so the documentation should be very quick to get to. That was the reason behind the blueprint generator I showed earlier: the Raxx blueprint project will take a documentation file, actually parse that documentation file, and look for the appropriate controllers, rather than the other way around, where you annotate your controllers after the fact with documentation. It works quite well, and it just keeps pushing forward one of the things that the
Elixir community is very keen on: they have module docs inside the code, so you can document your code. Just keep pushing that forward, because eventually you'll have 15 services, and one of them will have been made by one person who's since left, and you'll know nothing about it and just have to rebuild it.

This is Wobserver. One of the nice things about Erlang, well, the BEAM, is that you have an awful lot of introspection tools at your disposal; this is just that set of interfaces exposed over a web API. We can look at load charts, we can see all the running applications, so that was our supervision tree, and you can see all the processes under a certain tree, and this will just work. That was why we exposed the 4001 port; again, you can configure it, I didn't. 4001 is a perfectly acceptable default, and it gives you a lot of information from day one.

OK, so I've said this a few times, it's a bit of a bugbear: you don't need to do microservices, and Docker is very useful. Thanks. So this is me on the internet, CrowdHailer on Twitter and GitHub. One final thing I wanted to leave up here is this project called Elixir on Docker. It's a template, so you just clone it to use it; you have the Docker Compose file and a local Mix application. I do use that at work; it's not a toy project, we are using it in production, but be aware that you will be slightly outside the Elixir mainstream. If you don't have Elixir or Erlang installed, but you do have Docker installed, you can start with this straight away: you can clone it, you can run it, you can get experimenting. It has the web server set up, so if you go to the right port, 4001, you'll see that. It has a service testing suite set up, so you can run that, and it has instructions on how to run everything. And it has a few more niceties: the Dockerfiles are slightly longer than the ones I showed you, they have the volume mounting, and they have live code reloading, so if you make a change you can just refresh the page and keep going. So that's a few
niceties.

The main reason for this project is that deployment is still an open question, particularly in the Elixir environment, and really it's a sort of focus for my discussions on how we're going to keep moving forward. So yeah, I'm interested in talking about it after this; if you want to find me later on, just put something on this one, this is a great place to go for it. Thanks. So yeah, any questions?

Yes, so your question was, I think: with the service testing, how does that integrate when you have multiple things running? So those are service tests; they just talk to the service URL, so they're pointed at any of the running containers. To see everything that happens, you obviously need to aggregate the logs from all of those services. Locally, when you run Docker Compose, you can; I didn't go through everything on Docker and Docker Compose, but if you do "docker-compose logs" with a service name, it will show you just the aggregated logs from all of the instances of that service, and that's a pattern you'll use in production as well. So when we run our tests in a staging environment and they go wrong, to debug why they've gone wrong we are logging in to essentially a copy of our production logging setup. We use Kibana, and, well, we don't use Docker Cloud, that's good for examples; we use ECS, and that has a log aggregation mechanism. Log aggregation is another topic, but the nice thing about the integration tests is that you can check that you've got a sensible setup. So in many ways I've not answered your question; it's an open question how you put them together. It's not an Elixir question, that's the important thing, so if you've solved it for anything else, it's the same here. But yeah, Docker provides facilities to write logs to the disk of the machine, and then on AWS we just have their standard log aggregation, which we look through with Kibana. So if our tests start failing, it becomes a real debugging exercise, because that would be a customer getting something they
didn't want. So we then go through our logs and start trying to debug from there, and it's really good for finding whether third-party services are down and things like that, but sometimes it is quite tricky to go through all the logs. This is why I say you really don't want to have 15 services to check if you can get away with fewer. We do these things for scalability and fault tolerance, but it is harder the more services you have running, and there's not really any getting around that; it's just about testing your setup.

That's a good one, yes. So I don't have all of these things set up; currently we have to manually run the build again before we deploy. The code reloading mechanism doesn't recreate the image, that would take longer. It is something I slip up on occasionally: I make a load of changes, and if I forget to rebuild, nothing's changed. So you do then have to run the build step. Also, I showed you pushing manually to the hub, but there are CI solutions, so you can have Travis run your tests, build an image and push it to the hub, and Docker Cloud will redeploy based on that new image. All of these steps exist; again, it depends on how far you want to go. Do you have a Docker setup? OK, so my answer to you would be, again, don't push it too far too soon; those solutions exist, and if you want to, we can talk about the CI solutions that exist, but yeah, again, just keep it simple, keep it streamlined, to get started.

Any more questions? If the base image changes? Essentially you have to create new versions of all the images on top of that. So Docker is built on this concept of layers, and the first line of our Dockerfile was, um, I can't even remember what it started with; how far back have we got to go, is this the one? So there's this FROM line, and each line adds a new layer onto your image, so if my code changes, I need to rebuild from this point onwards. The main reason you put this first is because it very rarely changes, so you can run it once, and whenever the code changes you don't need to redo that step. If
your base image changes and you care about the changes, like it's a security update to your base image, rebuild the whole thing. Let's check the time... any more questions? Thanks for having me.
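As a sketch of the layer caching described in that last answer: this is an illustrative Dockerfile, not the one from the talk; the base image tag, paths, and commands are assumptions, ordered so the rarely-changing layers come first and a code change only invalidates the final layers.

```dockerfile
# Base image layer: changes rarely, so Docker caches everything built on it.
FROM elixir:1.5

WORKDIR /app

# Dependency layers: only invalidated when mix.exs / mix.lock change.
COPY mix.exs mix.lock ./
RUN mix deps.get

# Code layers: change on every edit, so they go last; rebuilds are cheap
# because all the layers above are served from the cache.
COPY . .
RUN mix compile

CMD ["mix", "run", "--no-halt"]
```

Rebuilding after a code change reuses the cached FROM and dependency layers; if the base image itself gets a security update, "docker build --pull --no-cache" rebuilds everything from the FROM line down.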