Today, I want to talk to you about our approach to doing continuous delivery with DCOS. I'll explain a little bit about what our motivations and goals were, and then introduce Spinnaker, the system that we're using, as well as get into some of the details about how we use it. My name is Will Gorman, and I work on a team developing deployment solutions at Cerner. We're a health care technology company. We've been using Mesos for a few years now, and have more recently started to adopt DCOS as we look towards containerizing our service workloads. One thing that we've found is that deploying software is challenging. I'm going to guess this is probably not a controversial statement. If anybody disagrees, I'd like to talk to you afterwards so I can learn what you're doing. But whenever we're doing a deployment, this is a time when our systems are at a greater risk of downtime or instability, whether due to a problem with the new code being deployed or a configuration change, or maybe some steps in a manual work plan being performed in the wrong order or omitted. So one thing that we're constantly evaluating is how we can reduce the risks of our deployments and make them more reliable and consistent. Continuous delivery is something that we believe can help us in this regard. With continuous delivery, we're striving to release smaller changes more frequently, and to avoid the situations where, by deploying a large set of unrelated features together, there's some unexpected interaction between them when they're deployed, or, if there is an issue, it's more difficult to pinpoint which of those features is responsible for it. But continuous delivery can also help us decrease the cost of our deployments by requiring us to address any points of friction or inefficiency that we might otherwise be able to tolerate or overlook when we deploy less frequently.
And finally, by decreasing the delay between the development of a feature and getting it into the hands of our users, we're able to provide value to them more quickly. So as we started to consider what kind of attributes a system that we would use for continuous delivery at Cerner would need to provide, there were really three key ones that we landed on. And those are that it would need to help make our deployments safe, automated, and flexible. So I'd like to spend just a minute on each of those and get into some more detail. At Cerner, the safety of our deployments is of the utmost concern. Since we work in health care, the reliability of our services can really have an impact on people's lives. If we have a downtime or some other interruption of service, that means that during that time, a clinician may not have access to all the information that they need to provide the best possible care. So we have to be doing everything we can to prevent issues as a result of deployments. It's certainly true that with DCOS, we have the ability to do things like roll back to an earlier version much more quickly and easily than we could with our traditional VM-based deployments. But that alone isn't necessarily enough to get us to the level of quality and assurance that we need. We want to make sure that we're doing everything possible to catch those types of issues before they reach a point where they can affect our users. So we have to make sure that we're doing certain things consistently and reliably during every deployment. Some of those things we want to do on every deployment are making sure we're running unit tests and integration tests as part of our builds, and deploying first to a sandbox or non-production environment before production. After we deploy, we want to execute smoke tests that send some kind of synthetic traffic to our services, just to make sure that they're doing what we expect after the deployment.
And of course, we have to deploy without introducing any kind of downtime. Finally, we want to be able to adopt deployment strategies like canary deployments, where we can deploy a new version of our service and send a small percentage of our traffic to it, so that we can analyze and observe the characteristics and behavior of that change before it gets to the point where it can impact a large portion of our users if something doesn't work right. With all these additional steps that we want to perform on every deployment, if they're going to be performed by humans manually, we're actually adding more complexity to our deployments and introducing more opportunities for human error. So when we look at automation of our continuous delivery, this is important not just so that we can go more quickly or reduce drudgery, although it does do that. Automation is also an important enabler of safety. For example, if we think about the parts of our deployment plans that hopefully we're not executing that frequently, those are probably the parts that deal with errors and recovering from errors. If we're automating, we're able to define not only what should happen when a deployment works successfully but also what should happen when something goes wrong, and we can test those sequences of operations to make sure that they are accurate and correct. Finally, automation is important to get us to the place where we are actually physically able to deploy those changes frequently enough to get the benefits. For flexibility, we have a need to manage and automate the deployments for many different teams and groups within Cerner that are using DCOS. A lot of these teams have very similar needs for their deployment patterns, but with some variations here and there. However, it can be organizationally challenging to get everyone to agree to adopt the same fixed approach.
Engineers left to their own devices naturally like to automate things, so we could anticipate that if we just turned everybody loose on DCOS and said have at it, we would probably see an explosion of hand-built deployment automation tools from these different teams. This would undercut the efficiency gains that we're also hoping to get from continuous delivery. So what we really want to do is make it easy for teams to define deployment pipelines that fit their needs, but also to be able to share and reuse the patterns and practices that are common across all of them. Ideally, we could visually represent these pipelines in a way that would make them easier to build and reason about. So, if we look at this set of requirements, we can see we're describing a system that's not exactly trivial. And the question that we were faced with at this point was, okay, how are we actually going to put a system like this into practice? As you've probably guessed from the title of this presentation, we didn't have to build something ourselves. We were able to leverage Spinnaker. Spinnaker is an open source, multi-cloud continuous delivery tool that originated at Netflix as the way that they would perform continuous delivery and orchestrate their deployments to AWS. However, since being open sourced, it's been expanded by the community to be able to manage deployments to Google Cloud Platform, Azure, and even container schedulers like Kubernetes. At the time we first started looking at Spinnaker, it didn't support deploying to DCOS, but it quickly became clear to us that if we were to build that bit of integration ourselves, it would be a big win and get us much more quickly to the place where we wanted to be with continuous delivery. Not only would it meet or exceed all of our requirements for safety, automation, and flexibility, but we could take advantage of the same proven methods that power Netflix's prolific deployments.
So some of the features that Spinnaker offers to enable continuous delivery are the ability to define pipelines that take individual commits all the way through to production. Along the way, they can run continuous integration builds and tests. For deployments to providers like AWS, you can bake virtual machine images from your builds. And then at deploy time, you can define deployment strategies like blue-green or canary, and validate the results of those deployments through Spinnaker. Finally, you're able to promote builds across environments, so you can ensure that every release goes through that defined set of steps in the correct order that you expect. The multi-cloud capabilities of Spinnaker are a great way to keep our deployment process from getting too tied to a single cloud provider. We know that today we want to use Marathon on DCOS, but if in a year or two we decide that we want to use Kubernetes instead, we don't want to be in a position of yanking the rug out from underneath all the developers that have come to rely on automating their deployments through the DCOS APIs. So we want to have a layer in place that can serve to translate, and Spinnaker meets that need. Spinnaker, in being multi-cloud, is not aiming to be a perfect abstraction that homogenizes the providers and offers only a least-common-denominator set of features across them. There are definitely differences exposed, and you can't expect to just write a pipeline once to deploy to AWS and then automatically apply that same pipeline to Kubernetes. However, the fact that our pipelines are defined declaratively as JSON in Spinnaker means that the process of moving from one provider to another is now simply a data transformation, since Spinnaker can handle the mechanics of actually interacting with those different providers to run the deployment.
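As a rough illustration of that declarative form, a pipeline in Spinnaker is stored as JSON along these lines. This is a simplified, hypothetical sketch: the names and image are made up, fields are abbreviated, and the real schema carries more detail.

```json
{
  "application": "mesoscon",
  "name": "Deploy to Dev",
  "triggers": [
    {
      "type": "docker",
      "account": "dockerhub",
      "repository": "example/mesoscon",
      "enabled": true
    }
  ],
  "stages": [
    {
      "type": "deploy",
      "name": "Deploy",
      "clusters": [
        {
          "cloudProvider": "dcos",
          "application": "mesoscon",
          "stack": "dev",
          "instances": 2
        }
      ]
    }
  ]
}
```

Retargeting a pipeline like this to another provider is then largely a matter of rewriting the provider-specific pieces, such as the cloudProvider value and the cluster attributes, rather than rebuilding the deployment tooling.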
So I want to look next at some of the concepts and ideas that are central to Spinnaker, and the way that it has an opinionated model of how cloud deployment should work. The first of these is the application. Spinnaker deployments are centered around applications, and these are the way that it logically groups all of the deployments of some individual deployable artifact. So a Spinnaker application is not equivalent to a Marathon application. It's a higher-level grouping that can encompass multiple deployments of that application, for example across dev and production. So here we can see in Spinnaker this is our mesoscon application, and digging a little further down we can see it's comprised of clusters. So we have a cluster for dev and a cluster for production. In Spinnaker, a cluster is, confusingly enough, not related to a DCOS cluster. It's just that sub-level of grouping in an application for deployments for a specific purpose. Going down one more level, we get to the fundamental unit of deployment in Spinnaker, which is the server group. This represents a set of instances that are all running the same version of some deployable artifact. So in AWS with Spinnaker, a server group would be something like an autoscaling group of VMs all running from the same AMI. In DCOS, a server group is the equivalent of a Marathon application. So here we see two different versions of our service, v000 and v001, each backed by a different Marathon application. The primary difference between how Spinnaker thinks of server groups and how Marathon thinks of applications is that in Marathon, if we want to change something about our application, like deploying a new tag of our Docker image or changing some environment variables, we can just update that application definition and restart the service to pick them up. In Spinnaker, just about the only change that we can make to a server group after it's been created is to resize it and scale it up or down.
Anytime we want to make some other, more substantial change, we have to create a new server group, as we see here. So this is a side-by-side comparison between the Spinnaker UI and the DCOS UI, where we can see in Spinnaker several clusters containing server groups, and those are each mapped to their backing Marathon applications by means of Spinnaker's naming convention. So Spinnaker controls those Marathon application names, and we see the different pieces of the Spinnaker organizational structure reflected in those names. As you might imagine, if we're treating applications this differently and creating new Marathon applications to deploy new changes, this probably has some implications for how we interact with things like load balancers in DCOS, and that's something that we'll see a little bit later on. When we want to create a new server group, we have a wizard here that exposes pretty much all of the same attributes that you see in the DCOS UI, so we can set the memory and CPU limits for our containers, choose which Docker image to run, and set network endpoints, environment variables, secrets, labels, health checks, all of that. The only real difference is that instead of just setting a name for the Marathon application, we're going to fill out some Spinnaker-specific values, like the stack and the detail, and let it derive the name. It also automatically manages the version suffix that we see on our application, so if we tell Spinnaker to create a server group that's part of a cluster that already exists, in this case the mesoscon dev cluster, it will automatically look and see what version suffix it should apply next. In Spinnaker, we can also allow clusters to contain server groups running in different regions. So in AWS or another infrastructure provider like that, a region is just what you would expect it to be. For DCOS, we have regions mapped to separate DCOS clusters.
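The naming convention works roughly like this. Spinnaker concatenates the application, stack, and detail values, then appends an auto-incremented version suffix; the names below are hypothetical examples of the resulting Marathon application names.

```
application  stack  detail   resulting Marathon applications
mesoscon     dev    (none)   mesoscon-dev-v000, mesoscon-dev-v001, ...
mesoscon     prod   api      mesoscon-prod-api-v000, mesoscon-prod-api-v001, ...
```

Because the name encodes the whole organizational structure, Spinnaker can look at the existing Marathon applications in a cluster and work out both which server groups belong to it and what the next version suffix should be.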
So in this example, we have a Marathon application running in a cluster named US East 1 and another one running in a cluster named US West 1. So we're able to use Spinnaker to provide a lightweight form of federation across these multiple DCOS clusters, and have a single place where we can view the status of our deployments across those clusters and manage things like migrating applications from one cluster to another. So now we get to pipelines, which are really the heart of Spinnaker. In Spinnaker, a pipeline is a mechanism that lets you describe and automate complicated deployments, and it can interact with other parts of your CI setup like Jenkins, Travis, GitHub, and Bitbucket. Pipelines are composed of a series of stages, where each stage represents some kind of common operation that happens in a cloud deployment. Pipelines support taking input parameters and can also produce output that's added into a pipeline context that gets passed along for subsequent stages to take advantage of, where they can do things like make decisions about which branch of the pipeline to execute, conditionally enable or disable certain stages, and run stages in parallel. To start our pipelines, they can be triggered manually or by a number of different options that give you some flexibility in how you actually want to plug this into your existing CI setup. So we can have a pipeline be triggered by a webhook from a GitHub repository, or by watching for the successful completion of a job in Jenkins or Travis. Pipelines can watch a Docker registry and look for a new tag to be pushed for an image. Pipelines can also trigger other pipelines within Spinnaker, allowing us to build up a graph of pipeline executions to separate things out and make pieces more reusable. And finally, we can have a cron-like schedule for our pipelines and run them whenever some scheduled interval expires.
In DCOS, the trigger that may be the most interesting for us is the Docker registry trigger, since this is an easy way to use whatever your existing setup is for building images and let Spinnaker take over from the point where that image gets pushed. So we can define what image we want and then some criteria about the tags that we want to look for. It's important to note that by default, Spinnaker is only looking for new tags. So if you're doing something like reusing a tag to point to a new image, as is frequently done with latest, Spinnaker's not going to pick those up. And the reason is that we want to make it easy to roll back within our pipelines, so we always want to have a way to easily refer to those previous versions. In our case, what we usually do is apply a build timestamp to our Docker image tags and then use a regular expression to match on only the part of the tag that we're interested in. So we have a number of different stages and operations that can be used in Spinnaker for interacting with DCOS. And rather than go through all of those in detail, I just want to touch on a few of the ones that are used most frequently. Unsurprisingly, the deploy stage gets used a lot during deployments. In the deploy stage, we're able to provide configuration like we saw in that create server group wizard earlier. But we can also do things like deploy multiple server groups at the same time. So in this example, we set it up to deploy the same server group configuration into multiple DCOS clusters. Most of the time in a pipeline, after we've deployed something, we're also going to need to destroy and remove something. Hopefully that's because the new version of our application is working great, and we're able to tell the destroy server group stage to target the previous server group and remove that older version.
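A Docker registry trigger with a tag expression might be configured something like this in the pipeline JSON. The image name and timestamp format here are made up for illustration, and the field names are approximate.

```json
{
  "type": "docker",
  "account": "dockerhub",
  "organization": "example",
  "repository": "example/mesoscon",
  "tag": "build-\\d{12}",
  "enabled": true
}
```

Only tags matching the regular expression, such as build-201709141030, would start the pipeline, and because every build gets a distinct timestamped tag, there's always a previous tag to roll back to.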
But if something goes wrong and the new version doesn't work out, we can also tell it to target the newest server group and remove that, to get us back to the state we were in previously. Sometimes in pipelines we want to have a place to just execute a script or some other kind of logic, to do something like run some smoke tests or perform a database migration. In Spinnaker, we have the ability to initiate Jenkins jobs from a pipeline, and that's one way that we can have an execution environment for general-purpose tasks like these. But since in DCOS we have a full container scheduler at our disposal with the Metronome framework, we also have this run job stage that lets us define the configuration for a Metronome job, what container we want and what command to run in it, and then we can have that execute during our pipeline. As part of this, we can also have that task choose to write out a JSON or properties file into the Mesos sandbox for the task, and Spinnaker can read the contents of that file and make it available as part of the pipeline context, as information that subsequent stages can use. The way that stages can make use of that context is through a really powerful and flexible feature that enables a lot of dynamism in our pipelines, and that is pipeline expressions. These use the Spring Expression Language to let us define little snippets of code that can be placed in just about any attribute of those fields when we're building a pipeline. And with pipeline expressions, we can do things like generate attributes of a pipeline at runtime for things that we don't know when we build it, or evaluate tests for certain conditions to determine which branches to take. So we can see we have a nice autocomplete UI that pops up here when you're using a pipeline expression, to show you what values are available in that pipeline context, using values from previous runs of that pipeline.
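As a sketch of what that looks like, the run job stage takes a Metronome job definition, and the job's command can drop a JSON file into the Mesos sandbox for Spinnaker to pick up. The script name, file name, and image below are hypothetical, and Spinnaker has to be told in the stage configuration which file to read.

```json
{
  "id": "smoke-test",
  "run": {
    "cpus": 0.1,
    "mem": 128,
    "docker": { "image": "example/smoke-tests:latest" },
    "cmd": "./run-smoke-tests.sh && echo '{\"smokeTestsPassed\": true}' > $MESOS_SANDBOX/results.json"
  }
}
```

The key-value pairs in the file end up in the stage's context, so a later stage could reference something like a smokeTestsPassed value through a pipeline expression.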
And to use a pipeline expression, we can do something like this as one example. If we have an input parameter for our pipeline that we want to set as a label on our Marathon application, inside of that label field we can refer to that input parameter through a pipeline expression, and at the time the pipeline is run, we'll get a prompt to input that parameter and it will propagate through into that label. We'll see a number of other examples of pipeline expressions later on. Spinnaker also has a number of features that it provides as a way to make deployments safer and less risky, so I want to touch on some of those next. With execution windows in Spinnaker, we want to make it possible for a commit to get built and go all the way into production automatically, but we don't necessarily want that to happen as soon as the commit is pushed. We may only want to allow our deployment to happen during off-peak load times, if our service follows some predictable pattern for load, or maybe we only want to deploy at times when we know we have enough people available to deal with anything unexpected that happens. In Spinnaker, for any stage in a pipeline we can define an execution window that specifies a range of time during which that stage is allowed to execute, and if the pipeline were to run outside of that time, it will be held in a suspended state, waiting for the window to begin or for a manual override to be provided to allow the pipeline to continue. Since pipelines are very powerful and flexible, unfortunately that also means that they can contain errors or things that cause something unintended to happen when the pipeline runs. The last thing that we want to find out has happened is that a pipeline had some flaw that caused us to shut down all of our active instances in production.
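Concretely, if the pipeline declared an input parameter named releaseChannel (a made-up name for illustration), the Marathon application's labels in the deploy stage could reference it like this:

```json
{
  "labels": {
    "RELEASE_CHANNEL": "${parameters.releaseChannel}"
  }
}
```

When the pipeline is started, Spinnaker prompts for releaseChannel and substitutes the supplied value into the label before the server group is created.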
In Spinnaker, to make it harder to do something like that, there are traffic guards that we can use to specify that certain clusters of our application are critical, and if any pipeline operation or manual operation would leave those in a state where there are no active instances, Spinnaker will stop that operation from occurring and leave the cluster as it is. Finally, with our applications, we want to make sure that they are really resilient and able to be highly available even in the face of failures or the loss of individual instances of our application. And this is something that Netflix has really pioneered with their chaos testing suite of tools. It began with the original Chaos Monkey, which was used to randomly terminate instances in AWS and make sure that the services those instances belonged to still remained available. The most recent version of Chaos Monkey has been rewritten to use Spinnaker APIs instead of AWS APIs directly. So this means that now we're able to use Chaos Monkey with any of the providers that Spinnaker supports deploying to, including DCOS. So now, on an application-by-application level, we can define a schedule or a frequency at which we want Chaos Monkey to randomly terminate instances, but also mark certain clusters as protected and prevent that from applying to them. So next I'd like to go over some of the ways that we've put these capabilities to use in our deployments and some of the patterns that we're using with Spinnaker today. One thing we can do is provide a way to enforce timeouts for deployments. With Marathon, if we have something going wrong with our deployment, like a health check that is constantly failing and restarting the services, or something that's causing those containers to crash immediately, Marathon is just really determined that eventually it's going to work. So it keeps retrying over and over, despite all evidence to the contrary. Eventually we're going to realize, you know what?
No, I don't think this is going to work, and we have to go in and manually stop that deployment and figure out what happened. With Spinnaker, instead we can define a pipeline that cleans these up automatically. So if we did something silly like trying to deploy an image with a tag that doesn't exist, Marathon is going to just keep trying to pull that over and over, but in our deploy stage in Spinnaker, we can override the default timeout of an hour with a lower value and then follow the deploy stage with a destroy server group stage that is only going to be conditionally enabled based on the value of this pipeline expression down here. This pipeline expression looks at the results of the deploy stage to see if it contains a timeout exception, and if it does, then this stage is going to be enabled. So when the pipeline runs and that deploy stage times out, we go and forcefully stop that running deployment, and we can then fail the pipeline and send a notification so someone can go in and see what happened. It's also important to note that since, when we're deploying with Spinnaker, we're deploying a new server group, a new Marathon application, at this point we haven't actually affected any of the instances of our running application. So when we stop that new deployment that's failing, we're just back to the state we were in before. Another concern that we have is that since we work in health care, a regulated industry, we have things and processes that have to be done as part of our release and deployment process, and not all of these are very easy to automate. We don't want to put off doing continuous delivery until we've figured out a way to automate every single one of these. So in Spinnaker, instead of having to do that, we can use manual judgment stages in our pipeline.
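Going back to the timeout pattern for a moment, that conditional destroy stage can be sketched roughly like this in the pipeline JSON. This is a simplified, hypothetical version: the stage names are made up, and the exact expression for detecting the timeout depends on how the deploy stage reports its failure in the pipeline context.

```json
{
  "type": "destroyServerGroup",
  "name": "Stop Failed Deployment",
  "stageEnabled": {
    "type": "expression",
    "expression": "${#stage('Deploy')['context'].toString().contains('TimeoutException')}"
  }
}
```

When the expression evaluates to false, the stage is simply skipped, so on a healthy run the pipeline continues past it without doing anything.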
So a manual judgment stage is a point where the pipeline is going to pause execution and then can send a notification to a user, or an authorized user really, because we have a permission model, so we can make sure that we're only allowing certain users to approve those manual judgments and allow the pipeline to continue. The user can come in and provide an input to that judgment to determine whether the pipeline should continue or stop, and that input can also be used as a way to control branching within the pipeline. So we've seen with the Docker registry trigger that we have a nice way to initiate a pipeline whenever we have a new version of our Docker image that we want to deploy. However, sometimes we just want to deploy a configuration change, like updating the environment variables for our application. We would prefer not to have to go into the Spinnaker UI, make those changes, and then manually run the pipelines in order to deploy configuration changes. For one thing, if we have configuration changes that affect a number of applications, that's going to be a fairly tedious process. But also, we'd really like to keep that configuration externalized from Spinnaker in version control. That way, we can have pull requests to review configuration changes to make sure that they look okay, and we also have access to the history of changes, so we can see how configuration has changed over time. In Spinnaker, if we keep our configuration as JSON in a GitHub repository and then use that GitHub repository as a webhook trigger for our pipelines, when we get to the deploy stage, we can edit the JSON of the pipeline directly instead of using the form. And in that JSON, we can replace any arbitrary section of the structure with the results of a pipeline expression.
So what we're doing here is, in a deploy stage, for the environment variables that we're going to use for our Marathon application, we're going to use this jsonFromUrl helper function, one of a number of helper functions that are available to pipeline expressions. It downloads the JSON file from our GitHub repository, the one that triggered the pipeline, and then inserts those values as the environment variables. So we can really have any arbitrary piece of our application's configuration come from our version control now. So now I want to get back to the item that I raised earlier about how we deal with load balancing in DCOS when we are using Spinnaker to deploy separate Marathon applications for every change. Spinnaker really expects load balancers for providers to have a programmatic interface that it can control. So think of ELBs for AWS, or services and ingress controllers for Kubernetes. However, in DCOS, if we're using marathon-lb, we assign labels to our applications and let marathon-lb automatically figure out the routing rules. So it doesn't really work with that model that Spinnaker wants to use to control load balancing during deployment strategies. This is somewhat inconvenient, but we can work around it by building additional stages into our pipelines that restart applications with different labels to change load balancing. But we have a bigger problem with marathon-lb, and that is that it doesn't support putting multiple Marathon applications in the same backend. So if we had an old version and a new version of our application running as two separate server groups deployed by Spinnaker, both of them trying to use the HAPROXY_0_VHOST label to say that they want traffic for the foo.example.com host, marathon-lb is only going to route to one of them. Fortunately, it's deterministic. It's based on the sort order of the name of the application.
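That helper usage looks roughly like this inside the deploy stage's JSON, with a made-up repository URL and file name; the #jsonFromUrl helper fetches and parses the file when the pipeline runs.

```json
{
  "env": "${#jsonFromUrl('https://raw.githubusercontent.com/example/service-config/master/env.json')}"
}
```

If env.json in the repository contains a plain object of key-value pairs, those become the Marathon application's environment variables on the next run of the pipeline, so a merged pull request to that file is all it takes to roll out a configuration change.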
But when we actually want to flip over to the new application, when we shut down that last instance of the old application, there's this period of time where there are no instances there to handle requests, but marathon-lb hasn't yet gotten the event from the Marathon event stream to reconfigure and point over to that new version. So we're going to be dropping requests on every deployment. That doesn't sound great. So marathon-lb is not going to be a viable option for us to use with Spinnaker. Fortunately, we were able to find an alternative in Traefik. Traefik is another load balancer, built with microservices in mind, that supports getting routing configuration from a number of different dynamic backends like Kubernetes, Docker Swarm, and, fortunately for us, DCOS. Traefik uses the same kind of label-based approach for determining routing rules as marathon-lb, and in fact also supports a subset of the marathon-lb labels for compatibility. But it also has its own additional set of labels, like the traefik.backend label: if we put the same traefik.backend value on two different Marathon applications, Traefik will round-robin requests between both of them. So now, by using Traefik with Spinnaker, we can get back to having zero-downtime deployments. Unfortunately, we still have the problem that we can't really provide exactly the same kind of baked-in deployment strategies as other providers do through Spinnaker's load balancer interface, since we're doing this with labels. However, there's hope: in DC/OS 1.10, the new Edge-LB external load balancing component does have an API that we can use. And this has just been released. I had to update my slides today when I realized it was no longer in beta. But we are planning to very soon take on the work of integrating Edge-LB as a Spinnaker load balancer, and when we do that, we can start providing deployment strategies that are built in, on par with some of the other providers that Spinnaker works with.
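To make the round-robin behavior concrete, here's a hypothetical set of Traefik labels on a Marathon application as Spinnaker might create it; the application id, backend name, and host are made up for illustration. A second server group carrying the same traefik.backend value would join the same backend and receive its share of requests.

```json
{
  "id": "/mesoscon-prod-v003",
  "labels": {
    "traefik.enable": "true",
    "traefik.portIndex": "0",
    "traefik.backend": "prod-mesoscon",
    "traefik.frontend.rule": "Host:foo.example.com"
  }
}
```

Because the old and new server groups can share a backend during the cutover, there's no window where the frontend has zero healthy instances behind it, which is what restores zero-downtime deployments.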
So next I want to show an example of putting a number of these pieces together and running a deployment for an application that starts with a commit to GitHub and takes it all the way through dev and production environments. So here we have an application running in a dev and a production cluster, with a number of instances in each. We can go and look at those dev and production endpoints, and we see this very fancy, modern web application displaying an incredibly creative hello world message. So when we want to push an update, we'll go and look at that first pipeline. This is our pipeline, which is configured to start with a Docker registry push. So we're defining what image we want to look at and telling it to watch Docker Hub. Since we've left that tag field blank, it's going to be looking for just any new tag. To get things started, we can go and make a change in our code to replace hello world with a much more appropriate hello MesosCon, and then commit and tag this. And what's going to happen next, after we push, is that we have Docker Hub set up to receive a webhook from this GitHub repository, and it's going to start building our new image using the same tag that we pushed to GitHub, so .18 in this case. So now it's building, and while we wait for that, we can go and see what's going to happen first, and that is that we're going to deploy this new version to dev. Since we didn't specify the tag ahead of time, since we didn't know it, we can see here that for the deploy stage we're able to tell it to use the image from the trigger and let it resolve that tag at runtime. So as the build of the image finishes, our pipeline starts running, and Spinnaker is now making calls to DCOS to tell it to start deploying our new server group. So we can see our dev version has now picked up that change and is showing the new version of our application.
And the next thing in our pipeline is to execute a smoke test, using a Metronome job that runs a simple curl against the dev endpoint and looks for the appropriate response. This is set up to either pass and continue on, or fail and initiate a rollback. If we look in dev, we can see both server groups running, with the old and the new version. And if we look at how that pass/fail check actually works, we can see another pipeline expression that looks at the status of the smoke test stage, determines whether it succeeded, and allows the pipeline to continue. So our smoke test has passed, and we're now on to destroying the old version: we shut down the hello world version that was running previously, and in our cluster view we can see it's just been shut down. The last thing our dev pipeline does after that finishes is run a final pipeline stage that triggers our deploy-to-prod pipeline. In the production pipeline, the first thing we need to figure out is how to tell it what image to deploy, since it wasn't triggered by a Docker registry push. For that purpose we have a Find Image from Cluster stage, where we tell Spinnaker to look at a server group in another running cluster, in this case our dev cluster, and pick up the image running there, so we know we're taking the version we just deployed to dev into production. And the first thing we do now is one final check before deploying live behind our production endpoint: we deploy at a latest.prod endpoint instead of just the prod endpoint, to give ourselves a chance to make sure everything looks good.
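The smoke test itself can be as small as the script below: fetch the endpoint, check that the expected text is in the body, and exit nonzero on failure so the pipeline's pass/fail check can route to the rollback branch. The URL and expected string are hypothetical, and this is just a sketch of what such a Metronome job might run.

```python
# Sketch of a smoke-test job: curl the dev endpoint and check the response
# body. A nonzero exit code is what signals failure back to the pipeline.
# The URL and expected text passed on the command line are hypothetical.
import sys
import urllib.request

def smoke_test(url, expected, timeout=5):
    """Fetch the endpoint and verify the expected text appears in the body."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = resp.read().decode("utf-8", errors="replace")
    except OSError as err:
        print("smoke test failed to reach %s: %s" % (url, err))
        return False
    ok = expected in body
    print("smoke test %s: looked for %r" % ("passed" if ok else "failed", expected))
    return ok

if __name__ == "__main__" and len(sys.argv) >= 3:
    # e.g.: python smoke_test.py http://hello.dev.example.com "Hello MesosCon"
    sys.exit(0 if smoke_test(sys.argv[1], sys.argv[2]) else 1)
```

On the Spinnaker side, the pass/fail check is a pipeline expression over the smoke test stage's status, which either lets the pipeline continue or kicks off the rollback stage.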
And the way that has been set up to work is through a Traefik label on our Marathon application, telling Traefik to take requests for that host name and route them to the new version. After that runs, we have a manual judgment stage in our pipeline. So I get an email saying I need to go verify something, and when I click on it, the link takes me directly to the point in the pipeline where I need to provide my judgment. Since we saw earlier that everything looked good with the Hello MesosCon version, we'll let it pass and continue on. As the pipeline continues, we now actually deploy our first instance behind the production load balancer. We deploy only a single instance rather than at full capacity, and the way we make sure it goes behind the production endpoint is with the traefik.backend label, prod-mesoscon, matching the label on the version that's already running. As that deployment proceeds, we can see as the page refreshes that requests are now being round-robined between the old and new versions of the application. Then we have another run-job stage to validate that everything looks good, and after that passes, we start scaling up the new version from one instance to five to get it to full capacity. After that completes, we start shutting down the old version, until production is running entirely on the new version of our application. So if this has been interesting and you'd like to try it out, the DCOS integration is available in the latest release of Spinnaker, the 1.3 release. Just a caveat about differences between open source and Enterprise DCOS: we use Enterprise DCOS, so that's been the version we've focused on supporting features for. Right now, the difference is that the authentication methods in open source DCOS are not supported.
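The production rollout above can be sketched as the sequence of Marathon API calls it roughly boils down to: deploy a one-instance canary behind the shared backend, scale it to full capacity once validation passes, then destroy the old app. This is a simplification of what Spinnaker actually drives (it manages server groups with its own naming scheme), and the app IDs and instance counts are hypothetical; the endpoints follow Marathon's v2 REST API.

```python
# Sketch of the canary-style production rollout as Marathon v2 API calls.
# App IDs and the capacity target are hypothetical.
def rollout_plan(old_app, new_app, full_capacity=5):
    """Return (method, path, payload) steps for a one-instance canary rollout."""
    steps = []
    # 1. Deploy a single canary instance of the new version. It lands behind
    #    the production endpoint because it carries the same traefik.backend
    #    label as the version already running.
    steps.append(("POST", "/v2/apps", {"id": new_app, "instances": 1}))
    # 2. After the validation job passes, scale the new version to capacity.
    steps.append(("PUT", "/v2/apps" + new_app, {"instances": full_capacity}))
    # 3. Shut down the old version; production now runs entirely on the new one.
    steps.append(("DELETE", "/v2/apps" + old_app, None))
    return steps

for method, path, payload in rollout_plan("/hello/hello-v017", "/hello/hello-v018"):
    print(method, path, payload or "")
```

Because both versions share the backend label the whole time, traffic shifts gradually with the instance counts rather than cutting over all at once.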
So for now, in order to try it out with open source DCOS, it will only work on a cluster with authentication disabled. Not necessarily a production-ready solution there, but ready to try out at any rate. I hope this has been a good demonstration of why we think Spinnaker is so valuable for bringing safe, automated, and flexible continuous delivery pipelines to DCOS. If you want to learn more about getting started, the spinnaker.io site has a number of resources, including a quick start and a code lab for getting things running on DCOS. Questions can be directed to the Slack channel as well. Also, if you're interested in solving challenging healthcare problems that can improve healthcare for millions of people, you should come work at Cerner; shameless plug there. But other than that, that's what I've got. So thank you, and enjoy the rest of MesosCon. We have time for one or two questions, maybe? Yeah. Hi, I saw that you talked a lot about Docker containers and high-level applications. How do you manage the underlying infrastructure? Do you manage your own servers, or is this all cloud-based? So we have some deployments in the cloud, and also deployments in our data center using OpenStack right now. We're trying to get to the point where we can deploy, but yeah, that's how we've been running it. How do you handle disaster recovery? Do you have a dedicated DCOS cluster in a DR facility, and if so, do you use this to deploy to it? Yeah, that's our plan: with the ability for one Spinnaker to control deployments to multiple DCOS clusters, we want to have that backup cluster available and ready with nothing running in it, and then just be able to update the Spinnaker pipeline configuration to say, nope, go to this cluster now, and bring the applications up in the new one. Last question. Yep. This looks great. I had historically ignored the Netflix stuff because it was all AMI-based.
Are you still the sole contributor, or is the community adopting this as a tool for DCOS and Mesos? Right now, we're the sole contributor for the DCOS feature in Spinnaker; it was just released a few days ago. But hopefully, yeah, we're wanting to get more involvement, and it would be great to have better support for open source DCOS as well.