Basically, my goals are these. I want you to understand releases: what their purpose is and what their benefits are. I want you to be able to leave here knowing how to use them with your project. It's really simple to get started, a couple of minutes of your time, and you can start using them today. I'm also going to cover how to use conform. It's a configuration management library; I'll explain a little more about what it does, but its point is to give you richer, more powerful configuration in production. A quick overview of what I'm going to be talking about: a little OTP refresher, just to make sure we're all on the same page about what applications are and how they're structured; how we deploy Elixir apps without releases; what releases are and how we use them; and then configuration management with and without conform.

The essence of OTP applications is that they have a well-defined structure and lifecycle. The structure comes from the behaviours we're familiar with, like GenServer and GenEvent; the lifecycle is start and stop, code changes, that kind of thing. They have explicit dependencies, so in your mix.exs you're defining exactly what your application depends on. There's also useful metadata about those applications: a name, a version, what modules are exported, what functions are exported from those modules, what applications are included in yours.

This is a quick visualization I borrowed off of Google. It's an overview of the Riak Core application, just a subset of it. You're all familiar with this: the top-level app is responsible for bootstrapping the supervision tree of your application, right? So you have your application behaviour and your core supervisor, and it's color-coded here: purple are the supervisors, yellow are the workers. Supervisors start their children, which are supervisors and workers themselves, and so on down the tree.
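All that metadata lives in your mix.exs. As a sketch, not the demo's actual project file (the :chat app name and the cowboy dependency are made up for illustration), the relevant bits look something like this:

```elixir
defmodule Chat.Mixfile do
  use Mix.Project

  def project do
    [app: :chat,          # the application name
     version: "0.0.1",    # an exact version; releases rely on this
     elixir: "~> 1.0",
     deps: deps()]
  end

  # OTP application metadata: the callback module that boots the
  # supervision tree, plus the applications we explicitly depend on.
  def application do
    [mod: {Chat, []},
     applications: [:logger, :cowboy]]
  end

  defp deps do
    [{:cowboy, "~> 1.0"}]
  end
end
```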
And this has that well-defined lifecycle I was talking about. When you start your application, it starts its core supervisor, which starts everything else below it. And when you stop it, it's the same thing: the stop commands are sent down the tree. If you've got a branch that dies somewhere in there, a supervisor is restarting it. That's stuff we're familiar with from working with Elixir and Erlang applications, but it's important to keep it in mind when we're discussing releases.

Another thing to keep in mind is the difference between process-based applications and library applications. They're effectively the same thing from Erlang's point of view as far as defining them goes, right? You're still defining the name and the version, and they're explicit as a dependency in your application. The only difference between process-based and library applications is the lifecycle: library applications just export modules with functions that you call, but there are no processes within them.

Okay, so: release management without releases. This is probably what most of you are doing today. I think a few of you may be using releases, I'm not sure, but the idea is that on your production box you have to install Elixir and Erlang, and potentially other dependencies, such as OpenSSL if you're using the crypto application. You have to find a way to deploy your code to the target system, so you're either tarring up your project and deploying it, or git cloning it, which would require git on your server, that kind of thing. You're configuring it once you've got it in production, or building it with the correct environment set so that your configuration is right, and then starting it up with mix run. I've got a note here about the fact that this can be automated: there are Dockerfiles available now that I've seen, and there's a Heroku buildpack.
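A process-based application ties into that lifecycle through the Application behaviour: its start callback boots the top-level supervisor, which starts everything below it. A minimal sketch, with hypothetical module names (this is not the demo app's actual code):

```elixir
defmodule Chat do
  use Application

  # Called when the application starts: boot the core supervisor.
  def start(_type, _args) do
    import Supervisor.Spec

    children = [
      worker(Chat.PingServer, [])  # a hypothetical worker process
    ]

    # If a branch of the tree dies, this supervisor restarts it.
    Supervisor.start_link(children,
      strategy: :one_for_one, name: Chat.Supervisor)
  end
end
```

A library application would simply omit the `mod:` entry in mix.exs and ship no Application module at all.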
I know somebody's doing some work with Ansible, that kind of thing. So this is obviously less than ideal. You have to have development dependencies in production. You're doing builds in production; that's not great. If you have multiple applications on one server with varying dependencies on versions of Erlang and Elixir, that's difficult, if not impossible, with this approach. There's no out-of-the-box way to automatically restart your application if it fails; you're going to have to use something like Upstart or write your own scripts around it. It's just not great, and it would be nice to have a guideline for how to approach these problems. You can't use hot upgrades and downgrades. You can reload modules manually, but obviously that doesn't work very well, so you're going to end up doing rolling restarts, which might be a good fit anyway, but it's nice to have the option available. This also does not work on some platforms, particularly embedded ones, where there might be read-only filesystems; you can't do that with this approach at all.

So what does better look like? You want your applications to be self-contained, with all their dependencies in one package. You don't want to have any tooling installed in production, ideally. Having it there is something everybody's familiar with already, but better is trying to get away from that; this helps you run things side by side, and so on. You also want a way to manage your application's lifecycle out of the box. You don't want to have to get another library or a set of tools to manage that, if possible. You want to be able to use hot upgrades and downgrades. You want to be able to cross-compile to your target system; this is more applicable if you're deploying to something like a Raspberry Pi or an Arduino. You want to be able to achieve that without having to build on the target system, if that's possible. You want easy deployment.
Drop the package on a server, extract it, run it. Simple as that. And you also want reproducible artifacts from your builds, so that if you build a project, you can take that artifact and easily deploy it to many target systems of the same architecture. The problem with the mix run approach is that when you copy everything to the server, there might be minor differences; if you're not very careful about how you version your dependencies, you might get different versions of them. It's not great.

So, better is effectively OTP releases, right? They're self-contained. They have all their dependencies, and they have the Erlang runtime system included; that's optional, but it's the default. They have built-in lifecycle management: starting, stopping, restarting, upgrading, downgrading, all built in. There's also health monitoring: if your application actually crashes, and you use an application called heart, which is baked into releases, it will monitor the heartbeat of the Erlang runtime system hosting your application and restart it if it needs to. There are hot upgrades and downgrades baked in, as I mentioned, and easy cross-compilation: all you have to do is reference a compiled version of the Erlang runtime system for your target system, and there are prebuilt packages for most target systems you'd want to cross-compile for. You just download one, reference it in your configuration, and boom, ready to go. Easy deployment: it comes as a tarball, so you just extract it on your target system and basically bin/&lt;app&gt; start and run. It's also easily reproduced, because the artifact is a tarball that has all of its explicit version dependencies. Once you've built a release, it never changes. It's immutable.

So what exactly is a release? It's a set of versioned OTP applications. These aren't rough dependency versions; these are explicit, exact versions of all the applications your app depends on.
That's not just your application and its direct dependencies, but also all transitive dependencies, including Erlang and Elixir applications. It's got the Erlang runtime system included. It contains release metadata on how to start your application, meaning in what order applications are started up and shut down. There's an explicit configuration mechanism, sys.config, which is what config.exs is ultimately compiled into. There's a vm.args file for providing arguments to the emulator. There are scripts for managing the release; this is how you get the start, stop, restart, upgrade, and downgrade functionality. And ultimately this is all rolled up into a tarball.

So here's a brief overview of what the release structure looks like. At the top level you just have your application. The bin directory includes the boot scripts for starting the app, but these are really thin layers that reference the bin directory under releases/&lt;version&gt;, where the actual boot script for that version lives. The ERTS directory, if you're including the runtime, will be there with its explicit version. The lib directory contains all the applications and all their beam files, including anything stored in priv; so if you're deploying a release of, say, a Phoenix application, all your assets will be included in there. The releases folder holds all the release-specific information. This is where your configuration is stored: the relup files, appups, sys.config, vm.args, the Erlang boot script, and so on. This is all mostly hidden from view if you're consuming releases; it's more important for the release handler, which grabs a lot of its information from there when it's unpacking and installing a release.

And ultimately, how do you use releases? I wrote exrm when I was interested in trying to use releases with Elixir applications. exrm is built on relx, which is basically the equivalent tool for Erlang apps.
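Putting that directory walkthrough together, the extracted release looks roughly like this; the app name, versions, and dependency list here are illustrative, not an exact listing:

```
my_app/
├── bin/                      # thin boot scripts (start, stop, remote_console, ...)
├── erts-6.4/                 # bundled Erlang runtime system (optional)
├── lib/                      # every OTP application, at an exact version
│   ├── my_app-0.0.1/
│   │   ├── ebin/             # compiled .beam files
│   │   └── priv/             # static assets, etc.
│   ├── elixir-1.0.4/
│   └── cowboy-1.0.0/
└── releases/
    └── 0.0.1/
        ├── my_app.boot       # the actual Erlang boot script
        ├── sys.config        # compiled from config.exs
        ├── vm.args           # emulator arguments
        └── my_app.tar.gz     # the deployable tarball
```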
So I'm extending that by pulling relx in as a library and then writing some Elixir code over the top. It provides mix tasks; mix release is basically the primary task you'll use with exrm, and there are some cleanup tasks and such as well. It provides automatic appup generation. This is particular to hot upgrades and downgrades. It's something that wasn't baked into relx, and there's somewhat of a reason for that: appups are particular to how your application is upgraded. But there's a 90% use case where the appups generated by exrm will suffice. If you're in a more complex situation, you'll have to reference the existing Erlang documentation, the Appup Cookbook, which tells you how to modify the appups generated by exrm in order to handle upgrades or downgrades more precisely.

There's also a plugin system. Right now there are a couple of different hooks that are executed, and the plugins are exposed as a behaviour. One of the ones I can think of off the top of my head was written by Stephen Pallen, who did an RPM plugin that generates an RPM package from a release. exrm also provides some intelligent defaults that aren't provided out of the box with relx. There's a default relx config that's generated based on information I'm able to pull from mix.exs and some other data, a default vm.args, and so on. Some of what that provides is setting the cookie and the name for the node, so that when you're deploying, you can just remote-console to it without having to configure anything.

So here's an overview of what the release process really looks like. What is exrm doing under the covers, right? It's reading the configuration for the release; this is the relx.config file that's either generated or provided by the user. It's then generating that relx config, merged with the user's.
Then the sys.config file, produced by reading config.exs for the environment that was provided; a vm.args file; and so on. It then runs the before-release hook, which is part of the plugin system. This is a place where you can provide some of your own code to prep the release as needed; you don't have to provide a plugin, it's just there if you need it. Then some discovery is performed. This is delegated to the relx library, so at this point we're calling into relx. It looks up all the applications that your release requires, and it also looks for previous releases, if any exist. It resolves all those dependencies: where are those applications, and can it locate them based on the constraints provided. It then takes all that information and builds the release package, which is dumped to your project root under rel/. What's output there is the extracted version of what's in the tarball, and then under rel/&lt;app&gt;/releases/&lt;version&gt; is where the tarball itself gets dumped. After packaging, it runs the after-release hooks, which allow you to do some post-modification of the release package if you want; I'm using this with conform to do some extra work after the fact. Then the release is repackaged into the tarball that was generated, and it runs the after-package hooks, so you could potentially write a plugin to do deployment for you if you wanted. exrm is pretty agnostic about how you deploy a release; all it really cares about is generating that release package for you.

So real quick, let's take a look at what this looks like, right? I've cloned the Elixir Phoenix chat demo, just because it's a good example of how we can do hot upgrades and downgrades, but I haven't really made any modifications to it. So when you're going to build your release, you've got your code ready to go, and you basically just do this.
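The whole build step boils down to a one-liner with the environment set; the chat_demo app name here is illustrative, and the task is exrm's mix release:

```shell
# Build a release of the app with the prod configuration
MIX_ENV=prod mix release

# The extracted release plus its tarball land under rel/
ls rel/chat_demo/releases/
```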
In this case I want to build with the prod configuration. You can see it'll compile your app if it hasn't been compiled, and then it summarizes what it's doing so you can monitor it, and tells you where the package ended up. Now, you can run your app directly from the rel folder, but in general I'd recommend not doing that outside of development. There's a mix release --dev flag, where your application's beams are symlinked into the release, which allows you to effectively test your release while you're making code changes. In this case I'm just going to quickly deploy to my temp folder. So we have our tarball here, and this is the structure I showed you before. At this point, all we have to do to run it is start it, and we have our application up, right? One of the things built into the chat example is that the system pings you periodically.

So let's say we want to make some code changes to this application. In this case I've pre-prepared all this, just to show you what I've done: I incremented the version number of the application and changed the system message that it spits out. Now, the thing about hot upgrades and downgrades, and the reason they're really appealing, is that if you have a bunch of ongoing connections to a server and you want to do an upgrade, you obviously don't want to drop those connections, right? So in this case we're going to generate a release containing our updated code, and part of what this process does is: when exrm is running and sees that there's an existing release and that the version number is a new one, it will assume it needs to generate an appup for you. That doesn't have to be used if you're doing rolling releases, but by default it's just going to say, okay, we want to do an upgrade.
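A moment ago I deployed by dropping the tarball in a folder, extracting it, and starting it. Here's a runnable sketch of just those mechanics, using a mock tarball and a stand-in boot script; a real exrm tarball and its bin/&lt;app&gt; script are of course far richer:

```shell
#!/bin/sh
set -e

# Build a mock release tarball, standing in for
# rel/chat_demo/releases/0.0.1/chat_demo.tar.gz
mkdir -p build/bin build/releases/0.0.1
printf '#!/bin/sh\necho "chat_demo $1"\n' > build/bin/chat_demo
chmod +x build/bin/chat_demo
tar -czf chat_demo.tar.gz -C build .

# "Deploy": copy the tarball to the target directory and extract it
mkdir -p /tmp/chat_demo
tar -xzf chat_demo.tar.gz -C /tmp/chat_demo

# Run it; a real release's script supports start, stop, restart,
# upgrade, downgrade, remote_console, and more
/tmp/chat_demo/bin/chat_demo start
```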
Where you want to put these upgrade packages is in the releases folder of your deployed system: make a new directory for the new version and drop the package there. So now that we have our package ready, our system is still running, of course; if we check here, it's still pinging us. We just tell our system to upgrade itself, right? It will unpack that package, unpack the configuration, and then install the new version, and if we go back and look, it's now telling us a different message. This is really awesome, right? If you've got a bunch of people in here all chatting, you don't want to ruin their day by killing the server. Everybody can carry on as if nothing has changed, but you've made changes on your backend, and any new requests that come in, let's say you changed the styles of that page to make it blue or something, will get all the new styles and be connected as normal. The old clients will still be consuming the old frontend; there's no way to hot upgrade that, at least that I know of. That would be a cool feature for releases if we could do it. If, however, the upgrade didn't go well, let's say you're like, oh man, what have we done, I want to go back, you can tell it to downgrade to the previous version, and it goes back to doing what it was doing before.

So that's the nice feature of releases, hot upgrades, but it's not the only one. The primary benefit of releases is not the hot upgrades and downgrades; that's a really cool thing, but the primary thing is that it makes deploying to production so much easier, and everything can now be run side by side, completely isolated. You don't have interdependencies between applications, so you're not upgrading Erlang on your production server and potentially breaking something, because everything has its own version of ERTS right there with it.

So, configuration of your releases, right?
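Concretely, the sequence I just ran through looks roughly like this on the deployment box; the paths, app name, and versions are illustrative, and the upgrade and downgrade subcommands are exrm's boot-script commands as I've described them:

```shell
# Drop the new package into the deployed release's releases/<version> directory
mkdir -p /opt/chat_demo/releases/0.0.2
cp chat_demo.tar.gz /opt/chat_demo/releases/0.0.2/

# Tell the running system to upgrade itself: it unpacks the package and
# configuration, then installs the new version without dropping connections
/opt/chat_demo/bin/chat_demo upgrade "0.0.2"

# And if the upgrade didn't go well, roll back
/opt/chat_demo/bin/chat_demo downgrade "0.0.1"
```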
Right now, the way configuration works when you're running with mix run: config.exs allows you to put function calls and such in your configuration, but when a release is built, your config.exs is evaluated at release build time. So if you're expecting it to be dynamic on your production server, it's not going to be. Let's say you're reading environment variables: it's going to read them from your build machine, not your production machine. This is sufficient for the base use case, but it's not great for people who aren't programmers. Say you have an ops team that isn't familiar with Elixir or Erlang; they're going to have a hard time configuring this application, because they won't know the syntax. The thing to remember, especially with config.exs, is that it's compiled to a sys.config file, and sys.config is in Erlang terms, not even Elixir terms, so it's something that takes a little adjusting to. Ideally we want our ops teams to not have to think about programming language syntax at all. So we don't have dynamic configuration, and we have configuration based on a programming language. It's not great.

So that's why I wrote conform. I was inspired by cuttlefish, which is a library written by Basho for Riak. It allows you to have more of an INI-style configuration for your ops teams, or even yourself. You basically define a schema, and then the end user works with this INI-style configuration instead of all the complexity behind your real configuration. So you can have a very complicated config, and it can be boiled down to something very simple and easy to consume. The schema itself has the concepts of mappings, transforms, and validators, because those are the three steps of configuration when you're transforming the static INI-style config into the sys.config consumed by the system. Schemas can also be extended.
So if your dependencies have conform schemas, your application can extend those schemas and then expose their configuration to users of your application. One of the nice benefits is that you can flip on or off which settings are displayed to users when the .conf file is generated. So if you have advanced knobs, you can hide those behind your schema: users can still know those settings exist and tweak them, without dumping them into your default configuration file. There's also a mechanism for providing documentation that references other settings. Let's say you have a couple of settings that all share the same documentation: your core default setting can carry all the documentation, and everything else can reference back to it. And then this configuration is actually merged with config.exs. This gives you a path for moving to conform without having to do it all at once: anything that's missing from what conform generates is read from config.exs, and that's all merged together. conform takes precedence, it's merged over the top of config.exs, but it allows you to still have those defaults there.

So what are mappings? They basically define how to map a user-facing setting to your backend setting, and I'll show you shortly a quick example of why you might want to do that. A mapping tells you the data type of the setting: there's a way to say that this value is an atom, or a list of atoms, or a list of lists of atoms, and so on; you can have complex types as well. It also defines default values, if you have sane ones; you don't have to provide one. And it defines validation rules. This uses the validators concept that I'll touch on here, but basically it allows you to say: when somebody provides a value for this setting, run this validator against it and make sure it's legit before we actually start up the application.
There are also these other settings, like whether the setting is hidden from view by default, or commented out by default. And this is also where you define your documentation. Excuse me.

What are transforms? These are simple functions. They take a single parameter, which is a reference to the current configuration state, and allow you to query that state so you can combine settings, modify existing ones, and so on. There's sort of a query syntax for this based on wildcards; it's really pretty simple stuff that's covered in more detail in the documentation, so I'm not going to go over it right now. A transform can execute any Elixir code, so this is an ideal place to do things like read from the environment or read from a file. You can get machine-specific information about the target system when the configuration is evaluated, which, used in combination with releases, happens when the release is started, upgraded, or downgraded; those are the three times. And you can package these transforms into modules that implement a behaviour that conform exposes. That way you can keep your schema file pretty simple, with just mappings, and then reference the modules that implement the callback.

Validators are much the same way. In this case, a validator receives the value that needs to be validated, plus optional arguments. conform provides a range validator as kind of an example: it takes the range to be validated against as an argument, and args to a validator are passed as a list in the second parameter. Validators need to return one of three things, right? There will be cases where you might want to warn that the provided setting is valid but maybe a little extreme, and you want to warn the user about that; warnings are treated as okay, but a message will be printed when the configuration is evaluated. Otherwise you return ok, or an error.
If an error occurs during validation, the configuration evaluation will stop, that message will be printed, and the user will have to fix it before they can continue. An interesting note about that, actually: with releases, if you're doing an upgrade or a downgrade, and the configuration is evaluated and something is invalid or there's an error, the upgrade or downgrade won't be executed. So you won't be in this weird state where you've done the upgrade and only then does the configuration fail; the configuration is evaluated before the upgrade is installed.

So, a quick overview of how conform works. There's a .conf file, the user-facing configuration. conform parses that, maps the settings from their source to their target, runs validation against those mappings, then runs all the transforms that are present, and merges that parsed, mapped, transformed config over the top of config.exs. All of that is output to sys.config.

So real quick, this is what the configuration looks like. This is a really simple one. You can have lists of things: let's say this is ports instead of port, you can do something like this. There are a lot of complex settings you can use with this. I'm not going to go into detail on those right now, but the documentation covers a lot of it, and you can also reference the tests in conform, where there's a whole bunch of different scenarios I test against, just for examples. So this is nice and clean, right? We've got just a few settings (there are actually some that are hidden here), but it's really readable and easy to get into. These documentation comments are actually generated from the schema; they aren't things that I manually typed in.

So schema.exs is the schema file. It looks just like an Elixir data structure, basically: a keyword list whose top-level settings are extends, import, mappings, transforms, and validators.
Extends and import I'm not going to go into right now, but they're how you extend a schema or import applications to be used as part of transforms or validators in this schema. One of the things to keep in mind with releases is that you might reference an application here that's not explicitly defined in your schema or in your application dependencies. You can do that, and when conform runs, it will make sure that application's code is packaged along with the schema.

And you can see here, let's just look at this chat.url.host setting. chat.url.host is what the user is going to see; then we've got some documentation, and the `to` key is where that setting is going to go. You can think of it as a key path, right? The output of this setting is going to end up in the chat application's config under the endpoint module, then url, then host, then the value. The raw version is not very user-friendly, right? It exposes the internals of your application. You don't want a module reference in your user-facing configuration: what happens if you want to change which module handles this endpoint, or something like that? Now your users have to change their configuration. It's just not ideal. So you can expose a simpler setting that reflects what you're trying to do, and then behind the scenes do whatever you need to hide that complexity.

We define the data type for the setting, and a sane default if we have one. An example of using hidden here is that chat.url.root is hidden, and you can see it's not in the generated file. You can put module names in here, integer values; I don't know if I have an example of a list in here. Like I was saying, it's not that important to cover all the possibilities as far as data types and so on go. It's more important to know that it's possible to do transforms and validate the settings provided in the .conf. I'm not going to show you the transforms or the validators right now.
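As a hedged sketch of the shape, not the exact schema from the demo (the app name, key paths, and defaults here are illustrative), a schema.exs along these lines maps the friendly chat.url.host setting onto the endpoint config:

```elixir
[
  extends: [],
  import: [],
  mappings: [
    # User-facing setting: the ops team writes `chat.url.host = example.com`
    "chat.url.host": [
      to: "chat.Elixir.Chat.Endpoint.url.host",  # key path into the real config
      datatype: :binary,
      default: "localhost",
      doc: "The hostname this endpoint is reachable at."
    ],
    # Hidden advanced knob: still settable, but omitted from the generated .conf
    "chat.url.root": [
      to: "chat.Elixir.Chat.Endpoint.url.root",
      datatype: :binary,
      default: "/",
      hidden: true
    ]
  ],
  transforms: [],
  validators: []
]
```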
It's sufficient to know that they're effectively just functions, and you define them as key-value pairs in those areas of the schema. Go ahead... oh, fine, I'm sorry. I think that's all I really have for now. Thanks, everybody.