Hello? Everyone can hear me OK? Perfect, thanks. So we're going to talk about the deployment API that we wrote. My name is Chris Mays, and Micah Noland is going to do a quick demo later, showing some of the code and the actual configuration of how this all works. To give you a little background on how we're using these technologies: we got on the Docker and DC/OS, or I should say Apache Mesos and Marathon, bandwagon about three years ago. We both work on the operations team; we're both tool developers on the operations team. And we noticed that if we had 12 dev teams handing off software to us, they'd all hand it off completely differently. So we decided to rally around Docker and make sure everyone started handing us Docker images, and our lives became a lot simpler after that. We ran the open source stack for about two years, and then we realized we were growing too big. We needed the enterprise support, we needed some of the enterprise features like the LDAP stuff, so we started talking about DC/OS, and that's where we're at today. So the problem statement, then, of what we ran into: we wanted to become very, very self-service for our development teams. We were becoming the bottleneck for onboarding new applications because we would typically set up almost everything for them: their automation, their build flows, and so on. And when you have only a couple of people doing that for a lot of teams, you can quickly get overwhelmed. So we wanted to be as self-service as possible. The API, while relatively simple, does have a learning curve once you start wanting to do some standardization. And where we wanted some of that standardization is in the Docker run settings. For instance, we use the Docker log driver.
And in order for that to work when you're passing stuff via Marathon, you have to set certain parameters up perfectly to make it all happen, and we didn't quite trust every developer in our company to do that correctly. So we wanted to be able to inject that in the middle as they're doing their deployment. Same thing with labels: for our monitoring to work, we need Docker labels injected correctly, so we do the same thing there. We also ran into some issues with Marathon-LB, which may be going away with Edge-LB coming out, but it exposes some pretty low-level HAProxy stuff. If you set that up incorrectly, or, as I reference here, miss a backslash-r or backslash-n, the HAProxy config that gets output the next time it refreshes is incorrect, and then your HAProxy never updates again, which is always fun until you fix the problem. So the solution for us was to proxy the DC/OS API. We put an API in front of it that the dev teams now hit. It adds an extra layer of control: it allows the ops team to control the parameters that the software gets run under. The other advantage is that it gives us pretty easy change management, and you'll see in a little bit why it makes that easier. The deployment API still utilizes DC/OS security: the username and password that get passed to us, we just send directly on to DC/OS, so whatever access controls have been set up continue to work for them in this situation. All the parameters we store, all the different stuff you would need for your app in the Marathon JSON, how many instances, CPU, memory, are stored in Git. The nice thing about that is, number one, you get all the advantages of an SCM.
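To make the "inject it in the middle" idea concrete, here is a minimal, hypothetical sketch of what the proxy does before forwarding an app definition to Marathon: it forces in the ops-mandated Docker log-driver parameter and monitoring label. The function and label names here are illustrative assumptions, not the project's actual code.

```python
# Hypothetical sketch: inject ops-controlled defaults into a Marathon app
# definition before proxying it on to DC/OS. Names are illustrative.

def inject_ops_defaults(app: dict) -> dict:
    docker = app.setdefault("container", {}).setdefault("docker", {})
    params = docker.setdefault("parameters", [])
    # Force the syslog log driver so container logs reach the ops pipeline,
    # unless the team already set one explicitly.
    if not any(p.get("key") == "log-driver" for p in params):
        params.append({"key": "log-driver", "value": "syslog"})
    # Force the label our monitoring relies on (label name is an assumption).
    labels = app.setdefault("labels", {})
    labels.setdefault("MONITORING", "enabled")
    return app

app = {"id": "/hello-world", "container": {"docker": {"image": "hello:1.0"}}}
result = inject_ops_defaults(app)
```

The point is that the dev team's original request passes through untouched except for the fields the ops team has decided to own.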
You get history: you know when things changed and who changed them. We use Gerrit inside our company, for instance, so we can do code reviews relatively easily on this stuff, and we get away from a lot of simple typo errors during configuration. So now, getting to the files. Micah is gonna show you this in more detail, but just a quick primer. The system we use supports both properties and YAML files. It's hierarchical, so you'll see there are three file names here. The application-global file is one you would set across all your deployments: if you want to make sure something is set for every deployment, you put it in that file. Then hello-world-deploy would be for the hello-world app; the stuff in that file governs that app only. And then there's another layer of hierarchy, hello-world-deploy-dev, which overrides things for the dev deployment. So if something is set higher in the hierarchy, the lower levels override what's set there. The deployment API is relatively simple. It has three endpoints. There's deploy, which actually goes and grabs the configs, creates the Marathon JSON, and then talks to DC/OS. The idea is you do the deployment, and then in your automation you call status over and over on some cadence, every 15 seconds or whatever, to make sure the deployment actually succeeded. Because if it's doing a rolling deploy, replacing instances for instance, it could take five minutes for the deployment to finish. And then there's a config endpoint, mainly for debugging and testing: it returns what the Marathon JSON would be for the given input. We have open sourced this. As of yesterday, we finally got this through our company's legal process, so it is open sourced at this address.
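The three-tier override described above can be sketched as a simple layered dictionary merge, most-global layer first, most-specific last. This is only an illustration of the inheritance behavior; the file names and keys are placeholders, not the project's real schema.

```python
# A minimal sketch of the three-tier hierarchy: the environment file overrides
# the application file, which overrides the global file. Later layers win for
# scalar keys; nested dicts are merged recursively.
def merge(*layers: dict) -> dict:
    out: dict = {}
    for layer in layers:  # pass layers from most global to most specific
        for k, v in layer.items():
            if isinstance(v, dict) and isinstance(out.get(k), dict):
                out[k] = merge(out[k], v)
            else:
                out[k] = v
    return out

global_cfg = {"mem": 512, "instances": 1}   # application-global
app_cfg    = {"mem": 1024, "cpus": 0.5}     # hello-world-deploy
env_cfg    = {"instances": 3}               # hello-world-deploy-dev
merged = merge(global_cfg, app_cfg, env_cfg)
# mem comes from the app file, instances from the dev file
```

So the dev deployment only has to state what differs; everything else is inherited.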
And then if anybody needs to contact us for whatever reason, here are our email addresses. And now we're gonna get to the demo so Micah has enough time. You guys hear me? All right, so we're gonna run a live demo. Hopefully the live demo gods are nice to us today. We're running this in Amazon, a demo DC/OS cluster in Amazon. The first thing I'm gonna do is install the deployment API service and show you the user interface for it. Then I'm going to show how we build the configuration files, those three files that Chris was talking about; we build those YAML files in the deployments Git repo. And finally I'm gonna show a Jenkins declarative pipeline running, which builds a service and deploys it to DC/OS. So the first thing we're gonna do is install the deployment API service to DC/OS. Unfortunately we're doing it with the Marathon JSON directly right now; we're still waiting for the final legal approval to get the image on Docker Hub. It is open sourced, our lawyers just need to approve the terms of service. Our ideal scenario for this is to have it as a DC/OS Universe app that you install with one click. We have that working in our environment; we just need that final approval for the image on Docker Hub and to get the pull request for the Mesosphere Universe approved. So I'm thinking in the next couple of weeks we'll have it in the Universe, and if anybody is really interested in getting a notification when it lands there, email me and I'll let you know. For this demo I'm installing it from an image I've built and pushed to our local Docker registry, and it's built out of the public repo. There are only two configuration items that go into the deployment. The first one is your Git URI, and that is the SSH URI for your Git repo. For this demo we've got it in GitHub; it's a public repository, so you can go in and see what this looks like.
And the second configuration item is the SSH private key to access that Git repo. In GitHub you create a deploy key for this: you generate your public and private key, add the public key to GitHub, and then you take your private key and use a DC/OS secret for it. If you're not familiar with that, all you do is take the chunk of text, your RSA private key, and paste it into the secret. And that's it, so let's deploy this service. Now, as this comes up, this is the GitHub source for our open source project. It's under the Apache license, so it should be no issue to use it in your environment. The documentation is relatively extensive, covering all the different scenarios you can have with your deployments. Looks like it came up. Perfect. OK, so this is the front page for the deployment API. It's generated out of Swagger. If you're not familiar with Swagger, it's a tool that auto-generates this nice page for interacting with your REST APIs. You wouldn't use this as part of your deployments, but it's very useful when you're testing and debugging and seeing what's available in the API. It's self-documenting: it shows you all the status codes the service can return, with descriptions of all of those; it shows you the model of exactly what you'd post and the model of what it's going to return. It gives you a lot of visibility into exactly what the REST API looks like. These are the three endpoints Chris was describing: deploy, status, and config. Deploy is for deploying your service. Status is what you would call on a regular basis as you're deploying, every 15 seconds: is my deployment done, is my deployment done. And config, like Chris said, is for transforming the YAML model into the Marathon config; I'll show you that in a minute so it makes more sense. So let's jump into the deployments repo. This is a public repo.
You can go in and look at this if you want. There's nothing secret about our deployments, because anything secret we want in DC/OS secrets anyway. So you can see there's the global configuration file, which has things we want to set at the global level. And like Chris said, there are the Docker parameters for the log driver. Maybe you want a different log format; the syslog server is something you could also inject here. These are the parameters you would want to set for every deployment you do. All of this is overridable, so typically we don't have a ton of things in this file, but it's very useful if you want to set things that are global across all your deployments. So in this example, let's say we have an app called cool-service. It's a hello-world type service for deploying. This is the application config for cool-service; it applies to every deployment of cool-service. Under the marathon namespace, this should look pretty familiar to you. It's very similar to the Marathon JSON format. But even though JSON is completely derivable from YAML, this YAML model isn't a one-to-one conversion. It's a custom YAML model, which gives us easier control over certain things that are difficult in the JSON format. Constraints, for instance, are a little more straightforward in the YAML. If you're familiar with the way DC/OS constraints work, it's an array of strings, and you kind of have to go back to the documentation to figure out which string corresponds to which field. But in this model, the fields are all labeled, so we get more control over that. That's why we have the custom YAML model. So, the three-tier hierarchy. Let me show you the development, or rather this environment, config as well, if it loads. OK, there we go. These are the properties that only apply to the dev deployment of cool-service.
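The labeled-constraints point above can be illustrated with a small conversion sketch: labeled YAML fields get turned into Marathon's array-of-strings constraint format. The field names (field, operator, value) are assumptions about what such a schema might look like, not the project's actual one.

```python
# Hypothetical sketch: convert labeled constraint fields (easier for humans)
# into Marathon's array-of-strings constraint format.
def to_marathon_constraints(constraints: list) -> list:
    out = []
    for c in constraints:
        row = [c["field"], c["operator"]]
        if "value" in c:        # some operators (e.g. CLUSTER) take a value
            row.append(c["value"])
        out.append(row)
    return out

yaml_constraints = [
    {"field": "hostname", "operator": "UNIQUE"},
    {"field": "rack", "operator": "CLUSTER", "value": "rack-1"},
]
marathon = to_marathon_constraints(yaml_constraints)
```

With labeled fields you never have to remember which position in the string array means what.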
So the way the three-tier hierarchy works is that environment can override application and global settings, and application can override global settings. So you have complete control over what you inherit and what you build. The easiest thing is to show what this all looks like. When you deploy, say cool-service to dev, it will merge this file, this file, and the global file. So if we look at the config endpoint and do dev (and this is the nice thing about Swagger: it gives you the exact curl command you would need to run this, for all of the endpoints, which is very nice for debugging), this should look very familiar. This is the Marathon JSON format: all three of those files combined and then transformed into Marathon JSON. The only tricky thing you really have to deal with is: how do you merge lists of things? The way we handle that is with these... check, check. All right, where was I? Lists. The way this works is we have additional namespaces for global, application, and environment properties. So under the global namespace you can set a list of Docker parameters, and that will get merged with any application Docker parameters or environment Docker parameters in the other namespaces. The easiest way to show this is with the labels. This application, the cool-service application, has two labels defined: the DC/OS service port index and the DC/OS service scheme. And the dev profile has one label defined under the environment namespace, the environment labels namespace, and it's specific to just the dev deployment. So when we go back to this, we'll see all three of these labels got merged: these two are from the application namespace, and this one is from the environment namespace.
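The namespace merge for labels can be sketched like this: labels declared under the global, application, and environment namespaces are combined rather than one replacing the other. The label names below are illustrative, not the project's actual schema.

```python
# Small sketch of combining labels across the three namespaces: every
# namespace's labels end up on the deployed app, with more specific
# namespaces able to add (or shadow) keys.
def combine_labels(global_ns: dict, application_ns: dict, environment_ns: dict) -> dict:
    merged = {}
    for ns in (global_ns, application_ns, environment_ns):
        merged.update(ns)
    return merged

application_labels = {"DCOS_SERVICE_PORT_INDEX": "0", "DCOS_SERVICE_SCHEME": "http"}
dev_labels = {"ENVIRONMENT": "dev"}   # defined only in the dev profile
labels = combine_labels({}, application_labels, dev_labels)
# all three labels appear in the final Marathon JSON
```

That matches what the demo shows: two labels from the application file plus one from the dev file, all present on the deployed service.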
And if we went to prod instead, you would see it has a different DC/OS service name, which comes from the prod file. Overriding individual properties is very simple. For example, the application defines the memory here as one gig, and if we look at the dev profile, you'll see it's set to one gig. But for production, maybe we want the memory to be a gig and a half, so we define it as a gig and a half at the prod, or environment, level and we get that. We never had to define that 1024 in the dev file; it got inherited from the application configuration. So it's pretty easy to override anything in the marathon namespace, and the other namespaces, the global, application, and environment namespaces, are simply combined. There is a way, as I mentioned, to ignore parameters in lists above you. For example, in this prod deployment we set the application list to nothing, which means it will not inherit from that list. So if we look at the dev deployment (oh geez, I'm trying to type with one hand here), you'll see it has the reference to this secret in the environment: we define the secret object and then we have the reference under the env here. But if we look at the prod deployment, you'd see the secret is still defined but it doesn't get put into the environment. That's how you control exactly what goes into your final deployment. There's one more feature that's easier to see in the documentation, something Chris mentioned earlier: label templates. These make it easy to define complex strings as templated options. We have two built in, specifically to deal with those strange HAProxy options that you need to add as labels. It injects the argument you pass in, here: this one is for health checks on a host name, and this one is for increasing the timeout for HAProxy. And of course you can define your own custom label templates without changing the code.
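The label-template idea can be sketched as a tiny lookup-and-substitute step: a template name plus an argument expands into the full option string, so dev teams never hand-type the fragile escape sequences. The template names and bodies below are illustrative guesses, not the two templates that actually ship with the project.

```python
# Hedged sketch of label templates: named templates expand an argument into a
# complex label value. Template names and bodies here are assumptions.
TEMPLATES = {
    # a health-check-on-hostname style option (illustrative)
    "httpchk_host": "option httpchk GET /health HTTP/1.1\\r\\nHost:\\ {0}",
    # an HAProxy timeout bump (illustrative)
    "client_timeout": "timeout client {0}",
}

def expand(template: str, arg: str) -> str:
    return TEMPLATES[template].format(arg)

label_value = expand("client_timeout", "300s")
```

Because the escaping lives in the template, a missing backslash-r or backslash-n can only be fixed (or broken) in one place, by the ops team.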
That's described in the documentation: you just add a YAML file to the deployments repo to define your template, and you have complete control over what that template looks like. Labels are used for lots of different things in Docker orchestration. We use them for monitoring; I know Datadog uses them, I know HAProxy uses them, lots of things use them. So let's show this in action. Here's the cool-service code. It's public; it's just a simple hello-world application. We're going to do this demo in Jenkins, but since it's a REST API, it should be clear that Jenkins is not really required for any of this. We've been using ElectricFlow for many months to do this, and most CI tools support calling REST APIs; in Travis CI, I think you can just do it with curl in the trusty image. So it's not Jenkins specific. But I will show you the Jenkinsfile, because we have examples in the documentation of exactly how to write these methods so your dev teams can use them. I import this library, which is in the documentation. I'm not gonna get too far into this, but all this library does is provide this function here, called deploymentAPI. What this function does is call the deploy endpoint, passing in your app name and environment and the Docker image, and then it blocks while it calls the status endpoint, waiting for a successful deployment. So this Jenkinsfile is not too complicated; otherwise all we're doing is testing and building a Docker image and doing the deployment. Let's pull up Jenkins. I'm gonna start this pipeline up. It'll run; it takes about 30 seconds for the executor to spin up in Mesos here, and it'll show up here. While that runs, I wanna show you exactly what you would put into the deploy call. There are five parameters that go into a deployment, and these are your DC/OS username and your DC/OS password, plus your app name, environment, and image.
And the image: this is the same image string that would go into your Marathon JSON, so you can deploy from Docker Hub or a private repo either way. It's the same thing you would pass into the Marathon JSON. So this is running; it should not take too much longer. This is the Jenkins library that I'm calling, just so you have an idea of what exactly it does: it calls deploy, passing in the same payload I was just showing you, and then it blocks until the deploy succeeds or fails. It's almost done. OK, so once it gets to that dev deployment section, we see it already starting up here in DC/OS. So it's deploying, and once the deploymentAPI function determines that it's healthy, after calling the status endpoint over and over... let me show you what that looks like if you're curious. This is an example of how you post to the deploy endpoint. This is what it returns: your app ID. And then when you post to the status endpoint, you pass in the same information: your username, password, app name, and app environment. You don't pass in the image, obviously, because you're just looking for the status, and it returns you something similar to the Marathon deployments API. This is what it looks like when it's deploying, and this is what it looks like when it's finally deployed. So all my Jenkins function does is look for this state of deployed and that tasks running equals tasks healthy. Right, so the service came up fine. This Jenkins pipeline requires me to click proceed to go forward, but what you'd really wanna do is have tests here: integration tests, load testing, whatever you wanna do. You'd be testing against the real deployment; you'd know it was deployed successfully if it got out of this stage, and you'd know you were testing against the real service here.
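The deploy-then-poll flow the Jenkins function performs can be sketched as follows. The HTTP calls are stubbed out so the control flow is visible without a live cluster; the endpoint names match the talk (deploy, status), but the payload and response field names are assumptions based on what the demo shows.

```python
# Sketch of deploy-then-poll: post to /deploy once, then call /status on a
# cadence until state is "deployed" and tasksRunning == tasksHealthy.
import itertools
import time

def post_deploy(payload: dict) -> dict:
    # Stub for POST /deploy; the real call sends username, password, app,
    # environment, and image, and returns the app ID.
    return {"appId": f"/{payload['environment']}/{payload['app']}"}

# Simulate a deployment that reports "deploying" twice before settling.
_states = itertools.chain(["deploying", "deploying"], itertools.repeat("deployed"))

def post_status(payload: dict) -> dict:
    # Stub for POST /status; the real call returns Marathon-like status info.
    state = next(_states)
    healthy = 3 if state == "deployed" else 1
    return {"state": state, "tasksRunning": 3, "tasksHealthy": healthy}

def deploy_and_wait(payload: dict, poll_seconds: float = 0) -> dict:
    post_deploy(payload)
    while True:
        status = post_status(payload)
        if status["state"] == "deployed" and status["tasksRunning"] == status["tasksHealthy"]:
            return status
        time.sleep(poll_seconds)   # the talk polls roughly every 15 seconds

result = deploy_and_wait({"app": "cool-service", "environment": "dev",
                          "image": "cool-service:1.0"})
```

In a real pipeline you would also cap the number of polls and fail the stage on a timeout or a failed state.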
So if I click proceed here (come on, Jenkins), we see it starting up here, and this has all the configuration that we've seen from the YAML. So that's pretty much it. It's pretty straightforward. Does anybody have any questions? [Audience question] Yeah, of DC/OS, yeah, we do. So we actually try to create namespaces for each group that we're supporting. As the operations team, of course, we have full access, so we keep ourselves as superuser, and then under root we'll usually create the name of the group, and then group/dev and group/prod. They'll have full access on dev to do what they want, and on prod they'll only have read access. That's typically how we handle it. Yeah, we figured this might be a problem that a lot of people are having. [Audience question] Yes, correct, it's a Spring Boot app. [Audience question] No, we don't have rollbacks in there right now. That's a good question. [Audience question] Oh, the Marathon labels. The Marathon labels are used to configure a lot of stuff; Datadog, for instance, turns Marathon labels into Datadog tags, so they're needed in order for our monitoring to work. [Audience question] Oh, all those YAML files are stored in Git. Yeah, so all the configurations are; it's basically configuration as code, in a way. The configuration files all get stored there so that we gain the benefits of an SCM in front of those things. Correct, yeah, correct. So yeah, it's pretty close. With all the different configuration files, we'll generally get somebody asking for a change; right now the developers don't have full access to those files, so it's the operations guys changing them. We use Gerrit in front of that. I don't know if Gerrit's that well known, but you basically have to get a plus-one from somebody else, which means someone else has taken a look at it and made sure you filled it in correctly.
We typically have ops guys doing that for other ops guys. What we're thinking of doing to make this even more self-service, though, since we have Git in front of it, is to let the developers make those changes and have the ops guys do the plus-one. I think that's the goal for the future here. Any other questions? All right, if not, that's all we got.