Thank you. So my name's John Simone. I work at Heroku, on our build and packaging team. In that role I get to watch how our customers use our Git push workflow, what they do with it, and how they use our platform to design their development flows. I also work with our internal tools team, so I get to see how Heroku deploys things internally. I'll let you speculate on who's better at it. What I'm going to talk to you about today is designing continuous delivery into your development platform. There are a lot of options now for deploying to the cloud, Heroku obviously being one. You could use AWS, or you could take AWS and build a layer on top of it the way Netflix famously has, along with many others. There's Cloud Foundry, and plenty of other options out there. Those are all very clearly development platforms. What I'm going to talk about is how you can take those ideas and apply them even to a non-cloud deployment, apply them to what you're doing day to day even if you aren't deploying to the cloud, and get closer to, or all the way to, the continuous delivery utopia that way. So first, a little level set. Continuous delivery. Show of hands: who's heard of continuous delivery? Hopefully that's everyone. Leave your hands up. Who's seen a previous talk on continuous delivery? Put your hand down if you haven't. Who does some form of continuous delivery at work? Okay. Who does continuous delivery all the way to their production environment, full bore? Still a few people. A couple of hands from Netflix, of course. So the point is, we all understand what continuous delivery is, but it's hard. There are easy parts, mid-difficulty parts, and really hard parts. To go quickly over the easy bits: you have to version your code. Okay. Hopefully everyone's doing that.
I won't ask for a show of hands on that, but if you're not versioning your code, you've got some thinking to do. Controlling your deployments: making sure that deployments are only done based on code that was versioned, based on your CI system. Okay, we've all been doing that for some time. Deploying behind feature flags. This is important. So you're deploying a new feature, and you're deploying it cold, right? You're not turning it on when it's deployed. You get the code out there and then you slowly roll it out. You see what happens. You get a chance to exercise the code, and you get a chance to roll back the rollout if you need to. That stuff's all not that hard to do, and a lot of people are doing it. Then we get into the tricky stuff. Versioning your configuration. It's a lot like versioning your code, but it's got that extra bit: no one would ever log on to a server that was in production and change code, but we feel like it's okay to do that with configuration sometimes, when really it's not. So getting configuration versioning right takes a few extra steps. Planning your database migrations. This is always the first real hurdle to continuous delivery. At this point it's: I can do this with my code, I've kind of got that down, I'm even getting the configuration right, but how am I going to make my schema changes? How am I going to do them in the right way? And this is also pretty well understood at this point: you do pre-deployment and post-deployment database migrations, you space them out from your deployments, and you can get to a safe place there. Having the ability to roll back deployments. Also very important. This is rolling back your feature flags, and then even saying, well, even that code behind a feature flag was bad; my deployment needs to be repeatable enough, and I need to have the artifacts, so that I can roll it back. And then doing canary deployments.
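As an aside, the gradual rollout behind both feature flags and canaries is often implemented as deterministic percentage bucketing. This is an illustrative Python sketch, not any particular system's implementation; `flag_enabled` and the flag names are hypothetical.

```python
import hashlib

def flag_enabled(flag: str, user_id: str, rollout_pct: int) -> bool:
    """Deterministically bucket a user into [0, 100) so a gradual rollout is
    stable: the same user stays on (or off) as the percentage ramps up."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_pct

# Deploy the code "cold" with the flag at 0%, then ramp the percentage up.
print(flag_enabled("new-checkout", "user-42", 0))    # always False at 0%
print(flag_enabled("new-checkout", "user-42", 100))  # always True at 100%
```

Because the bucketing is a pure function of flag name and user, ramping from 10% to 20% only adds users; nobody flaps between on and off.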
Being able to slowly roll something out across the fleet. Again, I'm moving very quickly through these because I hope these aren't new ideas. These are all things that we know we need to be doing. And then there's this stuff. These are the hard parts, and this is the real blocker. This is where it really gets tough: when you start talking about versioning your environment, about stateless and disposable app containers, and about snowflake servers. For anyone who hasn't heard the term snowflake server: servers that drift over time. So you've got the configuration, you've got it versioned, you've got it set up, and the day you set it up, it's perfect. You could repeat that setup on that day as many times as you want. But after that, it drifts. After that, things change, and you can't ever recreate that state. And maybe you changed it and you versioned what you changed, you updated the script, but unless you're actually exercising it, that doesn't always work in practice. So this is the hard stuff. This is the stuff where, aside from the few people who had their hands up at the end, we often fall short. It's really easy to tweak our development flow. It's really easy to work within our engineering teams and get as close as we can get to continuous delivery. But very often, the production environment is run by a different team, and you don't always have control over it. So you make compromises, which means you break these last few things and you don't get to true continuous delivery. You fall short; you break just a little bit at the end. Well, the good news is I have the silver bullet for you that's going to solve all this. Very easy. One step. Maybe I don't. There is no magic bullet, right? It's hard. It is hard. But what I do want to talk to you about is viewing what you do as a development platform.
And what I mean by that is, it's really easy to look at Heroku, to look at some other cloud platform, to look at what Netflix has built on top of AWS, and see a development platform, right? That's very clearly a platform. And even if you go back before the cloud days, back to the Java world and IBM WebSphere, where they sold you the IDE and the build tool and the production environment, that was very clearly a platform. But what I want to talk about is what a development platform really is, and whether we all have one today already. So when you think about a development platform and about some of these services, it's how your code gets integrated and deployed. It's how your environments are managed: what do your staging and test environments look like? How are they set up? How are servers provisioned? How are you deploying to them? It's the container your code runs in, and for everyone here, your code runs in a container. It might not be an LXC container, it might not be VMware virtualization, but there's some container. A server with an OS on it is a container. It might not be a great container, but it's a container. And then there's your operational data pipeline: while your code runs, you're getting log data out, you're getting metrics out. What are you doing with that stuff? The point of all this is that everyone here has one of these. Everyone here has a development platform. We just don't always think about it that way. We think about a pipeline where we're building code, we're doing testing, we're integrating that code, we're getting it ready for deployment, but then we don't think about the actual act of deploying. We don't think about those last few steps, the down-to-the-bare-metal pieces. We don't think of those as being part of the same platform.
But if you do start thinking about it that way, and if you do look at what some of these cloud platforms are doing, even if you're not using them, you start to realize that by thinking about things a little differently, by looking at continuous delivery as a problem that's not just your code, not just your engineering team, but the full top-to-bottom stack, it's going to really help you get over some of these hurdles. It's going to make things easier. So, to talk about some of these pieces: application versioning. Ensuring your deployments happen via CI. We talked about this; it's pretty easy to do, and we're all mostly doing it. By doing that, you're saying your deployments are 100% repeatable. But then also have a method of rollback built in. So keep those artifacts, and you don't want this to be a manual thing. I'll use Heroku as an example here: when you push code to Heroku, a build happens every time you push. You can't get code onto Heroku without that build happening. That's enforcing, I'll call it a poor man's CI, because you're not running tests every time; you do that separately. But it's enforcing some level of CI, and it's also keeping the artifacts around for you. That's not unique to Heroku; there are plenty of systems out there that do that for you. But by keeping the artifact around, you're making sure that rollback is built in, that you can always go back. And when you talk about the environment, this is the key point: disposable containers for code execution. On AWS, this might look like an AMI plus some setup scripts. On Heroku, it's built in as an LXC container. On Cloud Foundry, it's the virtualization layer you're running in; I don't know exactly what Cloud Foundry calls them, but they're Cloud Foundry's version of a Heroku dyno. You have that disposable container.
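The keep-every-artifact rollback model can be sketched as a release log where every deploy, including a rollback, is a new release pointing at a stored artifact. This is an illustrative Python sketch under my own naming (`App`, `Release`, `slug-v1`), not Heroku's actual API.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Release:
    version: int
    artifact: str  # e.g. the path or object-store key of a built slug

@dataclass
class App:
    releases: List[Release] = field(default_factory=list)
    current: Optional[Release] = None

    def deploy(self, artifact: str) -> Release:
        """Every deploy appends a release; artifacts are never discarded."""
        rel = Release(version=len(self.releases) + 1, artifact=artifact)
        self.releases.append(rel)
        self.current = rel
        return rel

    def rollback(self) -> Release:
        """Rollback is just a new release re-pointing at the previous artifact."""
        if len(self.releases) < 2:
            raise RuntimeError("nothing to roll back to")
        return self.deploy(self.releases[-2].artifact)

app = App()
app.deploy("slug-v1")
app.deploy("slug-v2")
app.rollback()
print(app.current.artifact)  # slug-v1
```

Note that the rollback creates release v3 rather than rewinding history, so the release log stays an append-only audit trail and nothing ever needs to be rebuilt.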
And that has to have certain characteristics to really get you over these last hurdles. The important thing is that these characteristics are built in at that layer, not at your code layer. So you're able to recreate containers on deployment and restart. This goes back to the disposable piece, right? Versioning your environment, saying I'm going to version the OS and every package on the OS, that's fine and good. But really, if you start from a known state, if you have a container that can be spun up from a known state, and then everything that happens there is part of your deployment, that gets you a lot further along. That gets you to the point where you can really call those containers disposable. But then the key to having them be disposable is to actually dispose of them. It's great to say something's disposable, but if you're starting it up and it's living on for days, weeks, months, and potentially having changes made to it, that's not disposable. When you deploy code, you should be able to spin up a container in some new place, deploy to it, and then destroy the old one. Restarting is the same way: you start up in a new place, and then you kill the old one. By doing that, you're exercising the fact that the container is disposable, and you're verifying that the state in that container is only the state that came in at origin. So whatever was running when you spun up that container, whatever was there in terms of configuration and code, that's what you want to be there. Everything else is ephemeral. This goes back to the log data and the metric data I was talking about before. You can't write that out to the file system. That needs to get off of the container some other way; you've got to stream it off elsewhere, because as soon as you write to the file system and expect that to be around outside the scope of your current transaction, you can't throw the container away anymore. Destroy them often.
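That spin-up-new, destroy-old cycle can be sketched like this. `Platform.replace` and the slug names are hypothetical, and the health check is just a stand-in assertion.

```python
import itertools
from typing import Dict, Optional

class Platform:
    """Every deploy or restart boots a fresh container from the build artifact,
    then destroys the old one, so nothing lives long enough to drift."""
    _ids = itertools.count(1)

    def __init__(self) -> None:
        self.running: Dict[int, str] = {}  # container id -> slug it booted from

    def _boot(self, slug: str) -> int:
        cid = next(self._ids)
        self.running[cid] = slug  # fresh container: origin state only
        return cid

    def replace(self, old_cid: Optional[int], slug: str) -> int:
        new_cid = self._boot(slug)            # 1. start in a new place
        assert self.running[new_cid] == slug  # 2. stand-in for a health check
        if old_cid is not None:
            del self.running[old_cid]         # 3. dispose of the old container
        return new_cid

platform = Platform()
c1 = platform.replace(None, "slug-v1")  # first boot
c2 = platform.replace(c1, "slug-v2")    # deploy: new one up, old one destroyed
```

The point of the sketch is that there is no in-place update path at all: the only way to change anything is to boot a fresh container and kill the old one.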
Take every opportunity, whether it's a restart, whether it's anything, to exercise that. Because if you're not exercising it, it's not actually repeatable. It's going to drift; it's going to get out of sync over time. The last bit is configuration. Of course you have to version it, but I'm talking about building it into the container. I talked about the origin state of the container: have a way for that configuration to be in the container at start. It might get versioned in the code, it might be a configuration service, it might be part of your deployment service, but the point is it's injected at startup. Heroku does this by environment variable, and that's how Cloud Foundry does it too. You could do it by writing a file, but whatever's in that container at start is what stays in that container. You're not changing it. If you do change it, it means a deployment. So if you're going to change an environment variable, you want to start the app somewhere else, in a new container with that new environment variable, and then kill your old one. You don't want to be changing things on the fly. An important consequence is that if you're not changing things on the fly, your feature flagging can't go through your traditional application configuration system. Your feature flags have to live in a database, or behind an API, some service, where you're pulling them in at runtime, so you're not relying on a runtime configuration change to update a feature flag. Because you don't want to be doing restarts as you're flagging in features. And when you put this all together, what does it give you? Well, Roy's last talk was excellent on this: it gives you a lot of opportunities to get visibility, right? Look at some of the things Netflix has done. Because you're controlling the deployment, because you're controlling everything that goes out, now you can build visibility on top of that.
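To make the point about feature flags living in a database or service concrete, here's an illustrative Python sketch: the dict backend stands in for the flag store, and the short cache means a flipped flag takes effect at runtime, without any restart. The names (`FlagStore`, `is_enabled`) are my own, not any particular library's.

```python
import time

class FlagStore:
    """Stand-in for a database or flag service: flags are read at request
    time (with a short cache), so flipping one never requires a restart."""
    def __init__(self, ttl_seconds: float = 30.0):
        self._backend = {}   # stand-in for the DB / flag API
        self._cache = {}     # flag name -> (value, fetched_at)
        self._ttl = ttl_seconds

    def set_flag(self, name: str, value: bool) -> None:
        self._backend[name] = value  # an operator flips this; no deploy needed

    def is_enabled(self, name: str) -> bool:
        hit = self._cache.get(name)
        if hit and time.monotonic() - hit[1] < self._ttl:
            return hit[0]
        value = self._backend.get(name, False)  # unknown flags default to off
        self._cache[name] = (value, time.monotonic())
        return value
```

The cache TTL is the trade-off knob: it bounds both the load on the flag store and how long a flipped flag takes to propagate across the fleet.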
You can get visibility of changes that are happening. You can get constant visibility of the current state. And you can look at things as applications, not infrastructure. That's key, because an application is going to be a set of services: a database, a front end, a back end. If you look at all those things separately, each changing independently, it gets really hard to track what went wrong when something fails. But if it's put together, if you have the right platform in place and you think about it a little differently, now you can get the visibility you need, where you know exactly what happened and exactly what went wrong when something breaks. That's about all I had, except for the shameless recruiting plug: Heroku's hiring, and so is Salesforce. All manner of engineers, product managers, technical account managers. If you're interested, jobs.heroku.com or salesforce.com slash dream job. And then I don't know how much time we have, but I think we have time for one or two questions. Oh, okay. So we have plenty of time for questions. Hi. So my company is very new to this whole workflow, and our biggest problem right now is with zero-downtime deployments. We have it mostly figured out on the developer side, but the migration side is what's killing us. So using MySQL, or some ACID-compliant database, how do you actually have zero downtime while migrating schemas? People seem to be saying just use NoSQL. But is that really the only answer? Do you use NoSQL? No, that's definitely not the only answer. That's a really good question. A lot of it comes down to thinking about how you're doing the changes. There are a few ways to do it, but a pretty standard practice is pre- and post-migrations.
So that's saying, when you plan your deployment: I've got a code deployment, I've got a database migration that's going to happen before it, and I've got one that's going to happen after. And the timing doesn't need to be synchronized; you can do that pre-migration a week before the deployment. The pre-migration is something like adding a new database column when you're going to rename something, and then your code is able to work in both states. Your old code can work on the schema the way it was; your new code will expect the new column. Then sometime after the deployment, after you know things are stable, you have a post-migration that cleans up the schema, and later you also clean up the old code. So it's thinking about the way the changes roll out in those steps. You do have to think a little about the way the code's written to achieve that, but you definitely can. Dave. Thank you. So you talked about worrying about the application, not the infrastructure. What do you do when your application is the infrastructure? I want to have a feature flag in my database, but it's my database that I'm actually writing the code to deploy, right? It's infrastructure as code. So how do you deal with that, when your infrastructure is the code that you're trying to deal with? Yeah, so when I say think about the application rather than the infrastructure, what I mean is how you're logically grouping things. But for what you're talking about, updating a feature flag is code, because it's all part of the infrastructure. Yeah. So, to repeat: you're using the system that you're trying to update to run the system that you're trying to update. Things get more interesting in those situations. When you run something that is a service people deploy apps on, like us running Heroku, you do get into these kinds of chicken-and-egg situations.
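To make that earlier migration answer concrete, here's a sketch against SQLite, with a hypothetical rename of a `users.name` column to `full_name`. The dual-write, read-with-fallback code in the middle is what lets the old and new schema coexist between the pre- and post-migrations.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('ada')")

# Pre-migration (can run days before the deploy): add the new column.
# Old code keeps working -- it never references full_name.
conn.execute("ALTER TABLE users ADD COLUMN full_name TEXT")

# New code works in BOTH states: backfill, write both columns,
# and read the new column with a fallback to the old one.
conn.execute("UPDATE users SET full_name = name WHERE full_name IS NULL")
conn.execute("INSERT INTO users (name, full_name) VALUES ('grace', 'grace')")

rows = conn.execute(
    "SELECT COALESCE(full_name, name) FROM users ORDER BY id"
).fetchall()
print(rows)  # [('ada',), ('grace',)]

# Post-migration (once the deploy is known stable): drop the old column
# and delete the fallback reads from the code.
```

Each step is individually safe to roll back, which is the whole point: at no moment does either the old or the new code see a schema it can't handle.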
And sometimes you have to do things that are maybe not 100% continuous delivery, or you have to bend the rules a little bit. There are always going to be exceptions, I guess, is the point. If I understand you correctly, you're suggesting to build the configuration into the binaries and make them immutable. Is that correct? Well, making it immutable, yes, but not necessarily building it into the binary. So whatever configuration is present at startup, and startup is a little bit of a gray area, right? Your app's going to get a clean container, some type of setup happens to get your app onto that container, which is usually post-build, and then the app comes up. As part of that, between setting up the container and the app coming up, you're going to inject configuration. That could be writing environment variables; on a long-lived server you would never write environment variables, but in this type of setup you can. It could be writing a file to the file system. But the point is, it doesn't change. The immutability comes more from the way you think about it, and from what you do when a change happens, meaning you restart in a new place when that configuration changes, rather than saying, well, I wrote it in a certain format so I know it's immutable. Because from what I read in the literature, you're supposed to keep the binaries the same throughout the environments, from dev to test, etc., and just change the configuration. Right. Just to clarify that, thanks. Yeah, absolutely. If you use Heroku as an example again, configuration is stored in a service that's associated with a deployment and with an application, but it's separate from your binary. So you could take that same binary and deploy it with different configuration. So, build and packaging.
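The immutable-at-startup configuration being discussed can be sketched like this; the variable names are hypothetical, and the read-only mapping just enforces the restart-to-change discipline in code.

```python
import os
from types import MappingProxyType

def load_config(keys):
    """Snapshot the environment once, at startup, into a read-only mapping.
    Changing a value means booting a new container, not mutating this object."""
    return MappingProxyType({k: os.environ[k] for k in keys if k in os.environ})

os.environ["LOG_LEVEL"] = "debug"   # injected by the platform before boot
CONFIG = load_config(["LOG_LEVEL"])

os.environ["LOG_LEVEL"] = "info"    # a later change never reaches CONFIG
print(CONFIG["LOG_LEVEL"])          # debug
```

Because the same snapshot mechanism runs in every environment, the same binary can be promoted from dev to test to production with only the injected values changing.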
So I was wondering what your response would be to the challenge of multiple different packaging formats in the same container. Is it worth, you know, needing to resolve dependencies between them, all that kind of stuff? What do you mean by different packaging formats? Maybe gems and RPMs and other kinds of packaging formats. And really, your response to the question of: if on a certain platform you have the ability to drive discipline toward a single package type, is it worth it? Okay. Yeah, so driving to a single package type, that's how Heroku works, and that's how we're able to support different languages. When we say package type, we're taking a step up, a different level of abstraction from the languages. We're not saying everything's a gem or everything's a WAR file. We're basically saying everything gets built as what we call a slug. And a slug is a pretty simple format; it's really just an archive of the files that are needed to run your application. Then we define a Procfile, and the Procfile is just a line that says how to launch this application. The application needs to be able to launch as just a Linux command-line process, something you can launch from the command line. And anything you need to launch it, whether it be the Java binary or the Ruby binary, all gets brought in during the build process. Yes, so everything's already packaged in there. During the build process you're running your dependency management system, whether that's Bundler or Maven or whatever it is, and all those dependencies get packaged into the slug, including the language runtime if you need one. So if you need Ruby, if you need Java, that's also packaged in the slug. It's truly a self-executable unit. And then that sits on top of the base container, and that's what launches the app. Did that answer the question? Yeah. And yeah, you could take it that far.
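The Procfile format itself really is that simple: one `name: command` line per process type, each launching the app as a plain command-line process. The parser below is my own illustrative sketch; the format is Heroku's.

```python
import shlex

def parse_procfile(text: str) -> dict:
    """Parse Procfile lines of the form 'name: command' into process types."""
    procs = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        name, _, command = line.partition(":")
        procs[name.strip()] = shlex.split(command.strip())
    return procs

procfile = """\
web: bundle exec puma -C config/puma.rb
worker: bundle exec sidekiq
"""
print(parse_procfile(procfile)["web"][0])  # bundle
```

Since every entry is just an argv for a Linux process, the platform never needs to know whether it's launching Ruby, Java, or anything else; the slug's build step already put the runtime in place.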
I mean, let's say you're deploying right on AWS: you could be dynamically building AMIs if you wanted to, and that could be your package, right? Yeah, exactly. Or you could have a couple of standardized AMIs and some package you layer on top of them. It's whatever ends up working for your system. But the point is, all of that becomes part of your platform more than part of your app. I think it's intrinsic to a system you want to continuously deliver on, right? Because you need your environment to be controlled. You need to be able to version that environment, to start at some known state and then layer everything you want on top of it as part of the build and deployment process. That could be built in, in something like Heroku, or it could be something you roll your own on your own hardware. But I think you do need that. Going once? Thank you, John. Thank you.