All right, so again, questions for all of you. Give us a brief overview of your current deployment strategies, and maybe a little bit of reasoning about how you chose the aspects of it that you did. Let's start with Desmond.

Hi, so I'm a freelance consultant, and I have a lot of internal applications that I spin up in my spare time and kick out to people. And I deploy these all under a single umbrella app onto Linode instances that I manage. So, bare Ubuntu, and I use Distillery, and... now I'm blanking on it. What's the other one? edeliver, thank you. Those two go together for a reason. So I use Distillery and edeliver to deploy my umbrella app to an Ubuntu instance in the cloud. There's no clustering, it's just a single thing.

Cool. At Motel, we're a digital product agency, so much of our deployment has been driven by having to hand off projects to non-technical teams. So our internal infrastructure has been built on top of GitLab and GitLab's runners. Our review apps are spun up with Docker in GitLab into Kubernetes. But at the end of the day, we deploy everything to Heroku.

Hi, I'm Ben from Grindr. So we have a lot of stuff in our stack. We've got Java, we've got Elixir. We're using Elixir for some microservices and for keeping track of user presence and online status. We do continuous deployment; we deploy every day. And we're using Makefiles with Ansible and Docker for actually doing the whole deployment process. And yeah, do you want me to go into detail about that? Or that's next. Say something else. All right. You really want me to say something else.

Yeah, yeah. So we're not using Docker to run in production, but we actually use Docker to build the artifact for the target OS, because with Elixir, you have to build it for your target OS. And we're also using it to actually test the deployment. So we spin up Docker and deploy onto that Docker machine to test the deployment itself. And the Makefiles, I mean, they're wonderful.
They glue everything together. Makefiles are 40-year-old technology and Docker's 5-year-old technology, so it all comes together.

So how is your Elixir deployment different from, or the same as, other standard DevOps procedures? Do you find that it varies a lot, or is it often very much in line? Can you define other DevOps procedures? No. Sure.

Ours has been... there's a tension between the Erlang way of doing things and modern DevOps. In modern DevOps, we worry about snowflake servers that will eventually melt, because they're unique and special and it takes a lot of work to configure them. As an agency, in 2017, with a team of five, we handed off, I think, six or seven Phoenix applications. So really short time frames and intense work, and then delivery of a project, of a set of assets. In that case, those finished products have to be something that a non-technical team can run and scale, and so we've been working on managing that tension, and it's an evolving process. But right now, we deploy our production apps to Heroku. Actually, GitLab CI just pushes with Git into Heroku. But I can also build a Distillery release in a Docker VM and push that into a review cluster internally. So we've been doing our best to manage that tension, and I would love to get to the place where we have the infrastructure, and can figure out the client relationship, to where we can deploy the Erlang way. But we just haven't been able to do that yet.

So we've really been trying to cut out the DevOps step, in a certain sense. We've moved all of the procedure of deployment into the repository itself. So for example, we have a microservice called the Profile Service, and it manages user profiles. Everything about deployment is in that repository. We don't have some repo over here that defines the deployment, that defines how it works. We don't have external processes.
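Ben mentioned using Docker to build the artifact for the target OS. A rough sketch of what that might look like follows; the Elixir image tag, the Distillery task, and the paths are illustrative assumptions, not Grindr's actual setup, and the script is only syntax-checked here since Docker may not be installed:

```shell
# Hypothetical sketch: build a prod release inside a container that
# matches the target OS, so the compiled artifacts link against the
# right system libraries. Image tag and commands are assumptions.
cat > build_release.sh <<'EOF'
#!/bin/sh
set -e
docker run --rm \
  -v "$PWD":/app -w /app \
  elixir:1.8-slim \
  sh -c "mix local.hex --force && mix local.rebar --force \
         && mix deps.get && MIX_ENV=prod mix distillery.release"
EOF
sh -n build_release.sh && echo "build script parses"
```

The point is that the only build dependency on the CI box (or a laptop) is Docker itself; everything Elixir-specific lives in the image and in the repo, which is exactly the "deployment lives in the repository" idea described above.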
One of the reasons we use Docker for building the artifact is that otherwise you have to build the artifact on a Jenkins box or something, and that's just another step in the process that we don't need. So by removing all of these different things, essentially we're saying: look, you guys are engineers. Part of your job is to actually define how it gets deployed onto a box and what the dependencies are on that box. Just make it happen. And then we're cutting out the middleman. No offense to any DevOps people in the room. We find it's a lot more efficient to do it that way. So I don't know if that's modern DevOps, or... I don't know what modern DevOps is. But that, I think, is the most agile way to actually get things deployed quickly and easily.

I don't do any DevOps. I have a hacked-together Ruby script that runs a couple of bash commands that say: build a release, push the release to production, restart production. It's about as simple as you can get. I don't deal with CI. I build my production releases on my production host. My traffic is low enough that I can handle the additional load on the CPU. And that means I don't have to deal with putting my secrets someplace, or with the app not having its environment variables there at compile time. So I just wiped away an entire class of problems. I have, I won't say a unique setup, but I'm not running a large app under heavy load and dealing with a big team of engineers. So my situation would probably not work at their companies. It works pretty well for me to have several smaller projects in production.

And I know you started mentioning this a little bit earlier, Ben. How does testing come into play at the time of, before, and possibly during deployment?

Yeah, absolutely. So one of the things about continuous deployment is you cannot do continuous deployment if you don't have good tests. You are going to shoot yourself in the foot. And I think they really play on each other.
So continuous deployment keeps you honest with your tests, because you're going to know if your tests are bad: you're going to fail in production. So we do TDD. We also have full, large integration tests that are also contained in the repository itself. We run those so that we actually launch the application in Docker. We use Docker for our database as well. We talk to the database, not through some mock database. And that keeps us pretty sure that everything is working correctly. And like I said before, we actually test the deployment. From our actual tests, we call our Ansible scripts, deploy onto a Docker image, and test that, hey, if I do the health check, it returns OK. So that gives us the confidence to really do continuous deployment, because before the continuous deployment step, it's going to run the tests. And if the tests fail, we're not going to deploy.

At Motel, with GitLab, which we self-host, every time you open a pull request on GitLab (they're called merge requests there), we run our tests. We also, in parallel, run mix format and check for formatting errors, and then run Credo in strict mode. And actually, just in the last couple of weeks, we've moved that running process off of the same Kubernetes cluster that was running GitLab, and I've moved it over to spot instances on EC2, which is actually really easy to set up. What that allows us to do is, for 10% of the cost of maintaining or reserving the instances, on demand, I can just pull out a 16-core or 20-core machine, run the tests, or in this case, check my typespecs from scratch. So I can run mix dialyzer, and it will build the PLT file in four minutes, and then shut down the resources. I don't have to pay for them anymore. So that's sort of how we test before we merge. Also, as part of keeping our infrastructure in the repo itself, there's a Dockerfile in there.
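The per-merge-request checks Scott lists amount to something like the script below. The commands follow the tools' documented CLIs (mix format, Credo, dialyxir's mix dialyzer task), but the exact pipeline layout is an assumption; the script is only syntax-checked here, since an Elixir toolchain isn't assumed to be installed:

```shell
# Hypothetical CI check script; command order and names are assumptions.
cat > ci_checks.sh <<'EOF'
#!/bin/sh
set -e
mix test                      # unit and integration tests
mix format --check-formatted  # fail the pipeline on unformatted code
mix credo --strict            # static analysis in strict mode
mix dialyzer                  # typespec checks; first run builds the PLT
EOF
sh -n ci_checks.sh && echo "ci script parses"
```

Running these in parallel jobs, as described above, keeps the slow step (the Dialyzer PLT build) from blocking the fast feedback from tests and formatting.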
So we'll build a Distillery release with Postgres, and then, using GitLab and GitLab review apps, put out a review app, as you would expect with Heroku, and allow our design team and our client to test the review app, to test the features as they're being worked on. And then when it gets merged, it's all deployed continuously to a staging Heroku app, which isn't as exciting. I'm still working on that part. So GitLab and the runners allow us to test a bunch of different facets of our applications in parallel, and then share that infrastructure, either in the repo itself if it's unique, like a Dockerfile, or across all of our applications. So that infrastructure is transparent, regardless of the client project.

As I said, I have no CI. I run my tests before I commit, and if everything's green, I push it up, and that's that. My infrastructure is pets, not cattle. I have my one Linode box, which is four cores, eight gigs of RAM. And that goes down from time to time; Linode has random maintenance updates, I would say three or four times a year. And it's a pain. They give me a window, but then my app goes down. I haven't been that conscientious about setting up the CD scripts to reboot my BEAM, which would solve the problem of the thing going down. But that's sort of particular to Linode. I don't think DigitalOcean or EC2 would have a similar problem. So my infrastructure is probably more fragile, but also more stable, because it's just the one machine.

So do we have questions from the audience right now? Hold on.

You mentioned a review app; that's a new concept for me. So how is it different from staging and QA environments?

It's similar to a QA environment, but it is specific to a branch that is going to be merged in. It's another form of code review, except it's feature review.
So instead of my designer having to maintain Elixir on his system, learn how to use Git, and then pull down a feature branch, boot up the app, and review it locally, there's a link in the merge request, after the review app runs, to a subdomain of our GitLab instance where the Docker container is running. And it is meant to be as close to production as possible. So it runs the same seed scripts as our staging environment does. It allows non-technical users to test an app.

Any other questions? Here we go. I have a question from Benjamin.

You say that you have a lot of microservices there, and each has its own lifecycle. But what if you have multiple services that know about each other? How are you actually handling that? How do you spin up groups of things that you need to test together, since they're very independent?

So I think your question is: how do the services talk to each other? How do they know where the others are, how to contact them?

Yes, when they know about each other, and you need to test them. Maybe you actually need to spin up two of them to test a specific feature, but each service has its own lifecycle. Imagine you have a user service, but it actually makes a call to another microservice for some information. If you need to test that on CI, do you actually spin up multiple services in order to test this feature, or are you mocking the requests between services that know about each other?

So I'm not clear on the question, so I don't know if I'm going to answer correctly. But in production, the services just use DNS. We don't have anything fancy. Environment variables: when we actually deploy, we specify, for example, if a service needs to know where the database is, or if one service needs to know where the Profile Service is, we can just set the environment variable at deployment time. Does that answer your question? Or it sounded like your question was about testing, also?
More about how you're actually handling all those microservices at deployment time, whether they know about each other or not. But you say that you have this identity microservice, and this identity microservice actually talks to another microservice.

Oh, no, we don't have an identity service. Are you talking about the Profile Service? Yeah, the Profile Service. OK. Yeah, so we just use DNS, so we don't couple. Essentially, we have a bunch of machines in AWS, and there's nothing special going on. We deploy a service on machines A, B, and C, and we deploy another service on D, E, and F. And if they need to talk to each other, they use DNS to actually contact each other. It also depends whether they're behind the VPC or public, and we have load balancers in front of these services as well. So we're not using Consul or any of these solutions for dynamically telling services where other services are. We just haven't found the need for that.

This may be a quick answer. One of the things that I haven't seen us do very much as a community yet is talk a lot about how to write distributed, like actual distributed multi-node apps in Elixir. And because of that, we haven't really talked about how you would deploy those types of things. I'm wondering if you have seen any apps that are like that, where you actually have fully distributed Erlang and Elixir nodes, and if you've seen any advice on how you would do those kinds of deployments.

So is your question: if there's state on the servers, how do you do a deployment without losing that state? Yeah. So we actually do that in one of our services, and we just had to do a custom thing. When we do our rolling deployment, say we have three servers, we actually make a call. And this is all the Erlang BEAM stuff, so the servers can communicate with each other. So we just drop down to that level and send the data before we go down, or it's the other way around.
I think we call, before we go down, from the other server, so that one now has all the state. And then we bring up the new one and give the data back. But it's all using the BEAM's ability to basically talk cross-server, so the processes can talk to each other. You know, it's a simple function call. If you send me your email, I can talk to my coworkers and give you details on what exactly we're doing.

Other questions?

I think this might be related to the previous question somewhat. One of the bullet points about what the BEAM brings you, one of the highlights that you read about early on, is hot code deployment. But it seems like nobody on the panel is actually doing that or has a use case for that yet.

I do that.

You do? So can you speak to maybe any particular challenges, or what your use case is? I'm curious.

It's mostly for fun. A lot of the tutorials around hot deployment... it's weird, because the language is pitched, as Emma pointed out, with all these nines of uptime. Less than a second of downtime every 20 years, which is like... I have never seen an app that exists for 20 years. But then when you say, OK, so how do I hot deploy, people start to back away. Oh, well, you might not want to do that. You have this code_change function in your GenServers, but here be dragons. Just restart your app, it's simpler. And it is simpler. Turn it off, turn it back on again; it removes a whole class of problems. How do you restore this state? I think it's a little weird; it feels like a bit of a bait and switch. And I think then your app stops being stateful in that way, and instead becomes a cache layer on top of your database. There are things you can do to keep state while your app's running, and so forth. But I do hot deploys, again, mostly for fun. My use case doesn't need that kind of uptime. I'm not running a big e-commerce platform where downtime is money. But it's fine.
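For reference, a Distillery hot-upgrade flow of the kind Desmond describes looks roughly like the script below, using Distillery 2.x task names; the release name my_app and the version 0.2.0 are placeholders, and the script is only syntax-checked here:

```shell
# Hypothetical hot-upgrade sketch; names and versions are placeholders.
cat > hot_upgrade.sh <<'EOF'
#!/bin/sh
set -e
# Build an upgrade release (with appup instructions) instead of a plain one:
MIX_ENV=prod mix distillery.release --upgrade
# After shipping the tarball to the host, ask the running node to install
# the new version without restarting the BEAM:
bin/my_app upgrade "0.2.0"
EOF
sh -n hot_upgrade.sh && echo "upgrade script parses"
```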
With Distillery, instead of saying build a release, you just say build an upgrade, and it takes care of it for you. If you are changing the shape of data structures, then you have to think: OK, I need to implement these code_change functions in my GenServers so they know how to migrate the objects that they're holding onto. In those cases, if I'm running a database migration or whatever, I turn it off and turn it back on, because it is simpler. It's not impossible to do; it just takes a little more thought. And I think that might get complicated as the team gets larger, and you have to be more mindful about what is going out and what effects this can have across the system. But for simple changes and additional features, it's fine. It's easy. It's transparent.

Yeah, so we don't do it, because unless there is a real need for it, to us it seems like it's not worth the risk; you can really screw up your deployment by doing that. Of course, if you're Ericsson, there was a reason for this ability, because you need to do live updating while phone calls are being routed. But we don't have that use case at this point. I'm sure you can find the use cases where it does open up a whole range of possibilities, but it's another layer of complexity and more stuff to worry about. But it can be fun. It's cool to do.

Well, what we do: in the mix config, there's a version field, and we just set that to git describe --long. And when we make a tag... so when we actually do our releases, we're not necessarily releasing the head of master with continuous delivery; we release the most recent tag. So say a tag is 1.1.1. git describe --long, wherever you are, is going to return 1.1.1, because it's the most recent tag, and then it returns your commit hash. So that's what we set our version to. So there's always a version, and it's automated.
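The git describe scheme Ben describes behaves like this; the throwaway repo and the tag 1.1.1 below are purely illustrative:

```shell
set -e
# Build a tiny demo repo: one commit, an annotated tag, one more commit.
rm -rf demo_repo
git init -q demo_repo
cd demo_repo
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "first"
git -c user.email=demo@example.com -c user.name=demo \
    tag -a 1.1.1 -m "release 1.1.1"
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "one commit past the tag"
# Output is <latest tag>-<commits since tag>-g<abbreviated hash>,
# e.g. something like 1.1.1-1-g1a2b3c4:
git describe --long | tee ../describe.txt
cd ..
```

Because the version embeds both the tag and the commit hash, any running release can be traced back to the exact commit it was built from.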
And it's tied to an actual tag version, which you can then tie to what features are included in this version.

So in my mix file, I have that version read from a version.txt file that's at the root of the repo. And the Ruby script I mentioned earlier that cobbles together the whole release, as part of that, reads the version and bumps it up. I can set a flag when I run the script to say patch, minor, or major; it will handle that, create a commit, add a tag, and then proceed with actually building the release and deploying it.

Questions?

This isn't specifically about deployment, but it might apply to a couple of you. One of the pitches around Erlang is certainly the thing you were talking about, the cross-node communication. But as soon as you do that, you're using a different model. If you're in a large organization with a bunch of different languages involved, they're almost surely communicating over HTTP; that's just how they operate. What approach do you take to that? Is it worth employing Erlang distribution just for the purposes of doing it, or is it worth sticking to the standard? I realize this is not exactly a deployment question; I apologize for that. But it also relates to deploying, because a lot of these organizations expect a very standardized deployment: put it in the Docker container and send it off.

So I think it depends on what level of abstraction you're talking about. When we're talking cross-service, which is really theoretically cross-domain, yeah, HTTP and JSON is a great way to communicate, because everyone can speak that language. But if you're talking machine-to-machine and server-to-server, and they're both running Erlang and they're both in the same cluster, for the way they communicate with each other you can drop down to the Erlang fundamentals, because that's built in.
But if I have one Erlang cluster over here and another Erlang cluster over there, for talking across them I would keep to using REST, HTTP, and stuff like that, because otherwise it seems like you're polluting the domains, I think.

I'll put in a question for Scott. What's the current state of deploying Elixir to Heroku? I think most of us here probably have some familiarity with the very streamlined experience of deploying Ruby to Heroku. What's that like for Elixir right now?

There's a couple of ways you can do it. The standard way is very Ruby-like. In your Procfile you just set mix phoenix.server, set the buildpack, and set a version number for the version of Elixir you want to run. It has its downsides; I can't remember all of them, but I do know it's not as optimal as building a Distillery release, compiling for the BEAM, and then running with Distillery. But it does have the same ergonomics of: I'm just deploying to Heroku, pushing when I need to, and Heroku is going to manage my database. It's going to set a DATABASE_URL. You don't have to set your environment variables at build time; you can just set them in Heroku from the command line, so you can run heroku config:set, and Heroku will reboot your dynos, and it works just like it would normally. So for light levels of traffic, for apps that you're iterating on very quickly, it works really well.

Did I hear that Heroku changed so that you can use Observer to connect remotely to your nodes? I don't know. Does anybody else know? You haven't been able to in the past. I think I heard this changed. Yeah. Sorry, I couldn't hear that. There's a new experimental feature of Heroku; I believe it's called Heroku Exec. You can hook into a box and do what you're looking for. Cool. Okay. Cool.

And there's another thing; this is something I learned recently.
Inside of a Procfile for a Phoenix app, you can add a release step, which could probably be a shell script that would build a Distillery release. I don't know exactly how that would work, but we use it to run our Ecto migrations. Heroku won't start your other processes until the release step is done. So it will wait and say, okay, I'm going to run your migrations, or run these calculations, and it will wait until your database is migrated. So that's how we do it with CI: on our staging servers, we don't have to worry about deployments or data migrations, because they just run automatically on release.

I think we have time for one more question. Make it a good one. Yeah. Everyone's like, no, I don't want to ask my question. Everybody's too intimidated to be the last question. Okay, one more.

Yeah, I don't know if this is a good question, but recently I saw an article about Google App Engine having released some product where they started to accept Elixir. So, thank you for sharing how you deploy now; do you continue to look at other options and what's out there? Maybe you can speak to other options that you're seriously considering, or what it would take to prompt you to change your deployment.

Mm-hmm. How about, have you looked at Gigalixir? A little bit. Gigalixir, for those that don't know, is like Heroku but built for Elixir, though no one seems to have looked at it, which is probably a bad sign. Yeah, I mean, I'm always open to better solutions for whatever. You know, the life of a software engineer, we're all very busy. So if I have a solution that's working, I'm not necessarily on the prowl for other solutions, but when they turn up, they turn up. And of course, I would consider a better solution if it is better.
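Going back to the release step Scott described a moment ago, a Procfile using it might look like the following; the migration task and server command are assumptions for a typical Phoenix app of that era, not Motel's exact setup:

```
release: mix ecto.migrate
web: mix phoenix.server
```

Heroku runs the release line after each build and only starts the web process once it exits successfully, which is what makes the automatic staging migrations safe.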
I would say that regardless of whether you're building big apps, deploying to Linux boxes, or deploying to Heroku, the thing that's most important, the thing that I'm most interested in when it comes to deployment in Elixir, is how I define and maintain and document my automation. As I learn things, I look to say: okay, how can I automate it? And then, how can I describe and document this stuff so that, number one, the rest of my team can do it, and also the clients that I hand this off to can do and maintain these things? Those are the things I come back to, regardless of your deployment technology and tool set: how can I do it, what benefits does it provide me, and what are the other repercussions of those actions?

I think I would consider other options when my situation changes. I chose my solution because I looked at other options and this fits me very well. If I hired several people, or my traffic patterns changed, that would probably mean reconsidering what I'm doing.

I think that we are all looking for a silver bullet for deployment, and there is no silver bullet. It depends on your team's needs, your infrastructure needs, your product needs, whether you have serious security needs, HIPAA compliance, that sort of thing. That will drive a lot of this, and I don't think we're going to find one blog post that rules them all. So we all have to do the homework and keep having these conversations.

I think that's a good place to wrap up. I'd like to thank Desmond, Scott, and Ben for this panel.