Hello. Good afternoon. I'm Mark Christensen and this is Dan Woods. We're both principal engineers in the Cloud and Compute Division of Target. I am principal engineer for the OpenStack team and Dan is in our sales and retail operations, also known as stores. I'm just going to do a quick introduction, then Dan's going to do the bulk of the presentation, and then we'll both be available for some questions afterwards. I know Dan needs to catch a plane so we're going to need to finish up pretty promptly, but we'll answer as many questions as we can. So Target, as you can imagine, has some pretty large operations. For several years we've been in various public clouds, and for about two years we have had OpenStack. Over the years we've had various kinds of software that we used for deployment, but they were internal, they were proprietary, and they were generally tailored specifically to the particular platform onto which the deployments were happening. So about six months ago we started working with Spinnaker to try to get a coherent CD pipeline across all of the deployment environments that we use, internal and external. And about four months ago we were very fortunate to have Dan join us. Dan was the original and principal developer on the Spinnaker project, so we're really glad to have him. His main focus is actually getting Spinnaker to the point where we use it to deploy not only to our internal and external cloud platforms but directly to our 1,800 stores. And so with that I'm going to turn it over to Dan and he'll let you know how that works. Thanks a lot, Mark. Yeah, no small task, believe me when I say that, but it should be interesting. Great, thanks everybody for showing up for this. Just real quick, before we even get started, I want to know how many folks here have actually heard of Spinnaker? Okay, vast majority of the room, that's great. How many folks are using it? Okay, all right, great. 
How many folks are planning to use it? Okay, and so it seems like the remaining group is here just to learn more about what it is, right? Does that sound about right? Okay, excellent. Great, yeah, so as Mark mentioned, I'm a principal engineer. I'm currently working on the stores platform to be able to continuously deliver out to our stores environment. Now that's separate from what I'm going to talk about today. Today I'm going to talk about the Spinnaker integration with OpenStack, primarily because this is an OpenStack conference, right? And so that makes sense. Okay, so let's go ahead and get started. Right out of the gate, what is Spinnaker? Very simply put, Spinnaker is a project that was developed by Netflix as a replacement for their prior solution, which was known as Asgard. Now Asgard enabled teams within Netflix to deliver to AWS in a way that pretty much hadn't been done before, right? It facilitated their mechanism for continuous delivery. Spinnaker is an evolution of that that brings together all of the parts necessary for a true continuous delivery pipeline: to be able to say that you check in code and something will happen with that code. It gets built, for example, an artifact gets generated, and then that artifact in turn generates another artifact, an image that we can then take and go out and deploy. Now Spinnaker has turned into a cross-organizational initiative where multiple teams across multiple companies are collaborating on it to bring basically a multi-cloud presence to Spinnaker's capabilities. 
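That check-in-to-deploy flow is the core idea, and it can be sketched in a few lines. This is purely illustrative; the function and package names here are hypothetical, not Spinnaker's APIs:

```python
# Illustrative sketch of the flow just described: a code check-in is
# built into an artifact, the artifact is baked into an immutable image,
# and the image is deployed. All names here are hypothetical.
def build(commit: str) -> str:
    """CI produces an OS package (e.g. an RPM) from the commit."""
    return f"myapp-{commit[:7]}.rpm"

def bake(package: str) -> str:
    """The package is baked onto a base image, yielding a deployable image."""
    return package.replace(".rpm", ".image")

def deploy(image: str) -> dict:
    """The image is rolled out as a new, immutable server group."""
    return {"serverGroup": "myapp-v000", "image": image}

def continuous_delivery(commit: str) -> dict:
    """Check-in to running deployment, end to end."""
    return deploy(bake(build(commit)))
```

Each stage consumes only the artifact produced by the previous one, which is what makes the resulting deployments immutable and reproducible.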
Now whereas Asgard was designed from the outset to target AWS, which is primarily where Netflix's runtime is, Spinnaker was designed with the idea that taking this philosophy of continuous delivery and immutable deployments across multiple clouds would enable an open source community around the project, one that would ultimately result in a much better, much more engaged, and much more versatile project for a bunch of different teams. So in that respect, we can effectively say this is a global continuous delivery platform, right? So if you're an organization that is running in one or many clouds, or, for example, if you have a public cloud footprint like Amazon or Google and then you have data center cloud capabilities as well, say with OpenStack or Kubernetes or any other mechanism that Spinnaker supports, you can use the same tool, the same patterns, and pretty much the exact same pipelines to deploy your code across all those environments. Now this really opens the door quite a bit for release engineering teams, which is really who I think Spinnaker benefits the most: the folks who effectively own the underlying infrastructure but want to enable a mechanism for application development teams to get their software out much more quickly. So in that respect, we have the multi-cloud support that's built in, and then there's also a measure, I would say, of cloud management, right? You can deal with all of the resources and artifacts that exist in a cloud in a way that makes much more sense than just dealing with, say, instances and load balancers and directly interfacing with security groups. As probably everybody in here knows, dealing with these things on an individual level can be really disjointed, right? 
It can be somewhat difficult to wrap your head around, especially if you're an application team that doesn't want to have to worry about the infrastructure and wants to move very quickly and do things in a more streamlined way. Spinnaker allows those teams to work with those resources in the context that they care about, which is their application, right? And so that's a really nice thing. And even more than that, this allows you to have a scalable infrastructure. When we talk about infrastructure and scaling at this level, really what we're talking about is enabling a vast number of teams to do their jobs, right? So in terms of scale, requests per second and those kinds of things at the infrastructure layer, that stuff obviously matters, right? But it doesn't matter as much as how quickly we can empower teams to deliver their software, and how much we don't need to get involved in their day to day when they need to do a release or anything like that. So these are the kinds of things that Spinnaker really opens the door on and does a really good job with. And it does this by working with deployments through immutable pipelines. Immutable pipelines are really the bread and butter that Spinnaker brings to the table. It's the area I think we should focus on a lot, and it's what we're going to look at and talk about today. And beyond that, there's this concept of deployment strategies in Spinnaker. Deployment strategies are basically a way of saying that you have what we could call a blessed way of delivering software to the cloud, right? And as part of that, you want to be able to share that approach across multiple projects, or across multiple deployments within your own application, in a way that works best for your organization, for your teams, for the projects that you care about. And we'll talk more about this as we go. 
The third aspect of this is that we have zero downtime deployments, right? This is extremely important: highly available deployments, so that we don't have any service disruption as teams need to roll out software. The pipeline mechanism also enables you to create canary deployments, alternative paths that consume a bit of the traffic, where you can monitor metrics and see the new changes live without them affecting the full scope of your production deployment. And last but certainly not least is this idea of chaos engineering. Who here is familiar with Chaos Monkey? Yeah, just about everybody, right? So this is a big deal. The new version of Chaos Monkey only runs through Spinnaker, right? And Spinnaker's ability to run pipelines and to act as a pipeline executor really facilitates that. So having Spinnaker in your infrastructure, you're immediately set up to succeed with this kind of failure injection testing, right? Now I want to talk a little bit about what Spinnaker is not. I think this is important, because it's easy to look at a tool like this, especially when it provides the level of capabilities that it does, and jump to conclusions, so let me very quickly short circuit where I'm sure a lot of your minds are going. Let's start by saying, right out of the gate, Spinnaker is not a platform as a service, okay? It doesn't provide that full end-to-end everything-in-the-middle that you would possibly need, and it's not a replacement for your existing infrastructure, right? What Spinnaker is, as simply put as I possibly can, is release engineering, okay? It's the mechanism that application teams are empowered to use to deliver software continuously to the cloud, whether that's public cloud, private cloud, containers with Kubernetes, whatever you want to do. Second point here is that Spinnaker is not an abstraction over the cloud. So there is a common model. 
There is, we'll call it, a set of guardrails that different cloud implementations need to operate within. But as far as abstracting away the capabilities of what the underlying infrastructure is able to do, it doesn't do that, right? And that was a conscious decision: if we try to abstract the cloud away over all of these underlying providers, we're never going to get it right. There are going to be capabilities in AWS that don't fit into OpenStack, and there are going to be capabilities in OpenStack that don't fit into Google Cloud, and so forth and so on, right? So it's not an abstraction over the cloud, but maybe you could consider it a molding, right? There is a common data model to represent your infrastructure, but it gets out of the way really quickly when you need to do something that's very specific, right? And we'll show you some of that kind of stuff too. And again, as I said before, Spinnaker is not a replacement for your infrastructure layer. It's a way to enable app teams, like I said, to think about the infrastructure in a way that makes sense to them, which is their application footprint, right? Not instances separated from server groups or heat templates or whatever the case might be. It's just about them being able to see what their cloud deployment is and to build pipelines around that so that they can safely deliver software. So again, cross-cloud: this is a really big aspect because it opens the door quite a bit. At present, there are implementations for Amazon, Kubernetes, Google Cloud, OpenStack, and Azure, which as I understand it is a work in progress but is coming along nonetheless. Cloud Foundry, a lot of work has been done there. And recently, I think that Oracle has picked it up with their bare metal cloud service, so they're providing a cloud driver implementation. 
And we're also pretty privileged today to have the folks from Veritas in the group, who have collaborated on the OpenStack implementation. That's a big deal, because being able to drive OpenStack with Spinnaker was a joint effort between Target and Veritas to provide this implementation for you to be able to work with OpenStack. And it works with the OpenStack v3 APIs, which is called, what's the name? Mitaka, right. I can never get that right. Mitaka it is. And furthermore, beyond that, and this I think is an equally big deal, the manner by which Spinnaker drives the OpenStack details under the covers really falls in line with the best practices for doing this kind of stuff, right? And that is important, because if you try to touch the infrastructure directly for every deployment that an application team needs to do, you're going to get a thousand different ways of doing it, right? In the way that Spinnaker talks to OpenStack, there's pretty much just one path for doing any one thing in particular. And that's kind of a nice thing. It's reproducible, we'll say. So like I said, Spinnaker does have some level of a model that exists around it, right? These are just common things that we know are going to exist at all cloud providers, common constructs that need to be in place for us to do things like build up those pipelines and build up a really nice presentation layer for you to manage your application footprint. So at the very top, we have this concept of an application, which represents your software project, and each application can be deployed into any credentialed target, what Spinnaker calls an account, say an AWS account or a project in OpenStack. We'll talk more about the corollary types. A cluster belongs to an application. And a cluster is some grouping of versioned server groups. And a server group is a homogeneous set of versioned instances. 
So you have a version of your application, right? Say version 1.0.0, and I need five instances. That creates a server group that corresponds to version 1.0.0, so that we know at any given point in time, those five instances that exist within that server group are all going to be the same version, they're all going to have the same configuration, and they're all going to be launched with the same configuration details. Now, if we create a new version of our software and we roll it out, that'll create a new server group with five new instances that correspond to, say, version 1.0.1, or something to that effect. And then, of course, an instance is exactly what it sounds like: a virtual machine, in our case. Load balancers are also there, and security groups. And these are, I think, pretty straightforward things. Does anyone not know what these things are? Virtual machines, everyone knows what virtual machines are? Okay. All right. Just testing the waters here. So in terms of the OpenStack integration, Spinnaker's data model maps pretty cleanly to OpenStack, honestly. An account is an OpenStack project, plus some credentials that are able to access that OpenStack project. A cluster is really sort of a meta thing, so there's really no difference there. A server group is an OpenStack stack, driven by a heat template. An instance is a virtual machine, which is lifecycle-managed by the stack. A load balancer is an LBaaS v2 object. I believe it doesn't work with v1. Correct. It doesn't work with v1. And a security group is exactly the same as you would find in OpenStack, right? So this is a pretty clean integration, which is really nice, because it doesn't take a big cognitive shift to step away from your regular OpenStack way of thinking about things and jump into the Spinnaker mindset. We don't always find that, which is great for our purposes. Now, does anybody work with OpenStack? 
Anybody else here, I should say, work with OpenStack at a very fundamental level? The individual pieces? Anybody on, like, the base infrastructure team? Okay, cool. So a couple of folks. Okay. So this will feel familiar for you: Spinnaker's components. I want to talk a little bit about the way that Spinnaker is architected, the way that it's put together. Spinnaker is a microservice architecture, right? And it has a set of optional components that can be included or not, as your deployment sees fit. The purpose behind doing individual microservices for the different functionality of the system is so that they can be developed, upgraded, scaled, and configured according to anything they need, without a full rollout of the underlying system being necessary. And that's kind of nice. So we'll talk about what some of these components are. Spinnaker has some central components, and these are basically the heart and soul of what Spinnaker does. This is what drives everything. At the very top of this list we have Clouddriver, and that's pretty much exactly what it sounds like. This basically manages all of the communications with any of the backend clouds. In our case, this is what actually talks to the OpenStack APIs. In turn, it's also responsible for observing any state changes at that cloud. So it does periodic polling of the OpenStack endpoints to see that new instances were launched, new server groups were created, new load balancers might be available, et cetera, et cetera, right? And then it takes all that stuff and throws it into a cache that is very quickly accessible. It can then turn that cache into those Spinnaker model objects that we care about later on. The next thing that we have is Orca. Orca is a very clever name, if I do say so, for the central orchestration engine. 
And this is pretty much what does all of the work of making Clouddriver do its thing. This is also the mechanism that drives those immutable pipelines. So we create a pipeline definition, we put some details into it about how we want to do a rollout or a deployment, and then Orca is responsible for making calls to Clouddriver and getting feedback from Clouddriver to see if a stage has succeeded, and so on. Third, but certainly not the least important of the central components, is a project called Front50. And again, keep in mind, these are all microservices, right? So they're individually deployable, and they can be managed on their own. Front50 is responsible for storing and serving application metadata. This metadata includes things like: who is the owner of an application? What port does an application run on? What does this application do? That kind of stuff. On top of that, it's also responsible for things like storing pipeline definitions for how you roll things out. So you go into Spinnaker and you build a pipeline that describes how you'll deliver your application, and that gets stored via Front50 and then is recalled later via Front50's APIs. Now an important thing about this is that, as part of the integration with OpenStack, Front50 can use a Swift object store to store this data as well, right? So we can really dig deep into our incorporation into the OpenStack platform. Then there are some optional components, which are maybe things that we should have all the time, but these really drive some of the periphery functions that we care quite a bit about. Echo is a system, not for taking a message and repeating it back, believe it or not, but actually for receiving and storing events. So anything out there can generate an event and send it over to Echo. 
Echo, in turn, will look at that event and any corresponding pipelines that are set to be triggered off of that event type, and then based off of that event it can trigger a pipeline via Orca. For facilitating that kind of stuff, we have another project called Igor. Igor is responsible for doing long polling of external resources. A really good example is Jenkins. Jenkins has a really nice integration with Spinnaker. If you're using Jenkins as your CI tool, Igor can be configured to poll your Jenkins instance to see when your builds have completed. Your pipelines can be configured with a Jenkins trigger so that any time Igor sees that a build has completed, it'll send a message over to Echo, which in turn will kick off your pipeline. We're going to demo all this; I'm not just going to talk the whole time. But I wanted to give you this brief rundown so that you could see this is not child's play. We've really thought through a lot of the difficult problems of delivering software, and we've tried to do it in a way that's sustainable and able to scale with your organization. A big part of that, in OpenStack land and in other lands that run virtual machines, is a project called Rosco. Rosco is effectively our bakery. And what a bakery does is take any artifact that you've generated. As part of your build, you can generate an OS package, an RPM or a Debian file for example, and publish that somewhere, to a centralized YUM repository for example, or to an apt repo. And then Rosco, upon seeing that you have generated this from, say, your Jenkins build, is able to look at that file, understand what its version is, and then kick off a bake. And what that does is it launches a new virtual machine from a base image that you've created, and then it will go and install that package. Question? Yes. 
So the question was, can I compare that to Packer? It actually uses Packer under the covers. So it's basically an adaptation layer for adapting Packer into the Spinnaker infrastructure. And to that end, it's very configurable, because you can configure the Packer templates to operate in exactly the way that you care about. And then finally, we have Fiat, which is basically an authorization and access control microservice within the Spinnaker platform. All of the services with APIs on them are able to integrate with Fiat as a way of deciding whether or not you should be able to access a given API. And then we have Gate, which is another creative name, for the API gateway. Basically this is the entry point. So all of these are backend services that are out there doing the real work; Gate sits in front of them and acts as the entry point to the platform. If you're building services that consume the Spinnaker API, for example to read data out of Spinnaker or even to go and trigger your own pipelines, you would communicate with these services through Gate. Think about a corollary with OpenStack: there are five or six microservices that drive the OpenStack platform, and you go to the identity API and it tells you where all these other things live. With Gate, instead, Gate is just the central entry point for talking to all of these backend services. And then Deck is the Spinnaker UI. Let's get to a demo. It's been 25 minutes. Let's look at something for real. So again, how many folks have actually used Spinnaker? Okay, a couple. Great. So this is the UI. This is Deck. And what I'm going to do is walk through setting up a new application. We'll call this application OpenStack Summit. Now, an important thing to understand about Spinnaker is that a lot of Spinnaker's infrastructure is driven via naming conventions. 
And that is a legacy artifact of the fact that Amazon didn't always have tags or other data structures for representing metadata about a resource. So there was a decision a long, long time ago at Netflix, when they were first going to the cloud, that a good way to tie things together would be through these naming conventions. And it's scaled very well, very impressively. So for example, when we create a server group, its name will come through as the name of your application, a dash, then some other detail, which we call a stack (that'll create a new cluster for you), another dash, and then something like v000, which is a sequence number for the server group. That's the mechanism for defining ancestry, for knowing which is an old version and which is a new version. We'll get to some of that. And I'll just say Dan Woods at target.com. So anyway, my point with that is, if you try to put a dash in your name, it's going to tell you that you can't do it, which is cool. And I only work in prod. That was a joke. Tough crowd. We're going to get to the funny jokes, I promise. Okay, great. So we've created an application called OpenStack Summit. The very first thing you can see is we're brought to the application page. From here we can see we have no server groups, right? We also have these options at the top for accessing pipelines; we have no pipelines. And then we can see any tasks that have run. We have only done the one thing, which is create an application, so we can see that we've done that. And then over here we have load balancers, security groups, and the configuration of our application. So this touches on all those things that we covered as part of the model objects that Spinnaker cares about. There's an account. There's clusters. There's server groups. Server groups have instances. There's load balancers. There's security groups. Right? Those are the things that matter. 
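The naming convention just described can be sketched as a tiny parser. Spinnaker's real name handling lives in Netflix's Frigga library and supports more fields than shown here; this is just an illustration of the application-stack-vNNN idea:

```python
import re

# Sketch of the server group naming convention described above:
#   <application>[-<stack>]-v<sequence>, e.g. "openstacksummit-web-v001".
# Spinnaker's real parsing (via the Frigga library) handles more fields;
# this is an illustration of the idea only.
NAME_PATTERN = re.compile(r"^(?P<app>[^-]+)(?:-(?P<stack>[^-]+))?-v(?P<seq>\d{3})$")

def parse_server_group(name: str) -> dict:
    """Split a server group name into application, stack, and sequence."""
    match = NAME_PATTERN.match(name)
    if match is None:
        raise ValueError(f"not a Spinnaker-style name: {name}")
    return {
        "application": match.group("app"),
        "stack": match.group("stack"),        # may be None ("no stack")
        "sequence": int(match.group("seq")),  # ancestry: v000 is older than v001
    }
```

The sequence number is what lets Spinnaker determine ancestry between server groups in a cluster, which is why dashes are forbidden inside the application name itself.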
So very quickly, we can just come in here and say that we want to do a deployment to prod. No stack. You can choose the subnet that corresponds to the project that you have configured; these will just show up. I was talking with Emily Burns from Veritas earlier, and she pointed out that one of the really nice things about Spinnaker is that it auto-discovers the infrastructure. The mechanism it uses within Clouddriver for polling and caching the details about your infrastructure and tying it together means that you don't need to do a lot to get Spinnaker going in your environment. So presumably you've created a subnet, you've created networks, you have instance types, you have images available, that kind of stuff. Spinnaker will just automatically understand that stuff once you've fired it up against some good credentials in your project. So let's go ahead and choose something. I'm just going to choose a random instance type, and then I'm going to say that I want to launch against the Target CentOS 7 latest image. I'm just going to get this fired up real quick, and then I'll come back to that. Sorry, let me do one other thing real quick before I do that. I'm going to create a load balancer. And this is great, because creating load balancers in OpenStack is hard. What we can see is that this load balancer will be named OpenStack Summit, so it auto-defaults to the name of our application. If we were deploying this into a particular stack, we could say this is my web stack, and you can see the naming convention follows along with it. I'm not going to do that. And I'll take this off and set it to a TCP health check that corresponds to the instance port that our application will run on. So we click that, and we're presented with this really nice real-time updated task monitoring screen. And this will tell us exactly what it's doing. 
So the first thing that it does is go and create the load balancer, right? The next thing it does is wait for Clouddriver to recognize that the load balancer has been created, which doesn't take very long at all. It's already done, I can tell you that. And then it does some extrapolation and refreshes the cache on demand. Great, and we're ready to go, right? Deleting a load balancer is equally easy in Spinnaker, which is also hard to do in the OpenStack UI. So it's a nice way of rounding out some of that infrastructure. So let's come back over here and create a new server group. I'm going to throw my load balancer on it, and then I'm going to tell it that I want it in my default security group, which by default I open up completely, which we should all do, right? I'm not a security expert. I never claimed to be. Okay, so we'll click create, and very similar to the creation of the load balancer, this kicks off an orchestration that goes off and does a set of what we call atomic operations. And these atomic operations are what's necessary for getting these changes enacted in our cloud. And I meant to have this up as well so that we could see it actually happening. We'll get there. Yeah, so this is going on in the front, and now we're just waiting for some instances to come up. And they will. But in the meanwhile, while we're waiting for them to come up, we can see that they're already there, and we can even get some details about them. These ones may not come up cleanly because I don't have anything in there. But if we come over here, now we can see at our orchestration layer that we have some stacks. Let me go ahead and switch this so that it may look closer to what you have. Right, so we can see that we've created a stack, which corresponds to our server group. We can see it's named exactly the same, with a certain number of instances, one instance in this case. 
And we can see that it's come up, and it's just about ready to go. It's just waiting on a few other things to happen behind the scenes. But at this point, we have an instance up, we have some IP addresses for it, we can see what security groups it's in, and we can see some details about it, right? If we look at the server group, we can see the launch configuration, what instance types we actually launched, et cetera, et cetera. So with just a couple of clicks, we've created a new application, we've created a load balancer, and we've deployed an image. That's pretty rad. And I'll also show you the load balancer, just to prove I'm not tricking you. Here it is. Perfect. Great. Looks great. Okay, I'll start at the beginning. Now, again, one of the biggest and nicest things that Spinnaker brings to the table for application teams in terms of continuous delivery is not the ability to click a lot of buttons, which is what I just showed you, but the ability to build these delivery pipelines, right? These robust things that will go off and do some stuff for you. A very common approach for a new application getting off the ground is to create what we call a build, bake, deploy pipeline. And this is pretty much exactly what it sounds like: we build some version of the software, we bake it into an image, and then it gets deployed automatically. Does that make sense to everybody? Any questions? Nope. Okay, then we will demo a basic delivery pipeline. So I have another application called Dan Test, which I created, and this has a simple build, bake, deploy pipeline, exactly like what we just talked about. What every pipeline has the capability of doing is being triggered by some change in state, right? Some external impetus to say now is a good time for this pipeline to run. In this case, we've configured it to work off of Jenkins. I've configured Igor with a Jenkins master that I've created and have handy. 
And I've given it the name of the job that I want to be able to trigger off of. Now in the background, what Spinnaker is going to do is poll this Jenkins install across all the projects, and when it sees that this particular project has built something, it will pull that in, and that will trigger this pipeline, which in turn will go off and bake this particular OS package. Now what I'm building is an RPM file, and I'm publishing it out to a YUM repository that exists out there in the world. And I've configured that same Target CentOS 7 latest image as the base image, and I've configured it with the name CentOS 7. So anything that my build produces, any RPM that my build produces, is going to be overlaid on top of that base image, right? Which is kind of nice. And then once it's done baking, it'll take that bake and go out and deploy it. One of the things that we get the capability of specifying as we're going through and doing these deployments is what kind of strategy we want to enact as part of the rollout, right? And this is where those deployment strategies come into play. Now you can build custom deployment strategies that correspond to your particular use case; we'll talk about that more in a minute. But right out of the box, we have the ability to do what's known as a Highlander deploy. Does anyone know the TV show Highlander? There can be only one, right? Maybe I'm getting a little old. But basically what this will do is go out and deploy the new version. Once the new version is out there, and it comes up healthy, and everything looks really good and it's ready to start taking traffic, then the pipeline will say, okay, the new version is out there, it's up and running, and then it'll just go and destroy everything else. Just scorched-earth the entire prior infrastructure. And that's kind of a nice thing, right? Especially if you're in dev, because you only want that one version. 
With Highlander, you only want the one, right? But I don't choose that. Not often, not for prod, which is where I live, right? Another option is that you can say, I don't want any strategy, and you can build into your pipeline your own mechanism for cleaning up the rest of the deployments that are out there. Or you can just say, hey, every time I deploy a new version, it's fine, I'll go and manually clean them up myself. And there are buttons in Spinnaker, which I'll show you in just a second, to be able to do that. Now the third option here is actually a really cool one. And this is something that maybe some of you have come to know as blue-green deployments or something along those lines. Netflix calls it red/black deployments. So in the earlier days of doing this kind of stuff, as I'm sure many folks here will remember, blue-green deployments were effectively a way of saying I have an active environment and then I have a not-active environment. And basically any time you would make a version change to your software, you would deploy it to the non-active environment, and then you would take the active one out of the load balancer and put the new one in. And that model doesn't really fit in an immutable world, because basically what we're saying with immutable deployments is we're going to create all new servers, and then when we're done, we're going to destroy all the old ones, right? There is no persistent state. Everything can always be rebuilt from that immutable image. So the idea with red/black deployments is, well, number one, those are Netflix's colors, right? So that was an easy choice. But also that you have one active version and one basically rollback version, a hot rollback, right? That allows you to get a new version up and running, keep the old one around but out of traffic rotation for some period of time. And then when you feel comfortable with the new version, you can just go in and wipe out that old one.
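Red/black, by contrast, keeps the ancestor around but takes it out of traffic rotation. The same kind of toy sketch (again with invented data shapes, not Spinnaker's API) shows the difference:

```python
def red_black_deploy(server_groups, new_version):
    """Deploy the new version and disable, but keep, the old groups as hot rollbacks."""
    disabled_old = [{**g, "enabled": False} for g in server_groups]
    new_group = {"version": new_version, "enabled": True}
    return disabled_old + [new_group]

fleet = [{"version": "v1", "enabled": True}]
fleet = red_black_deploy(fleet, "v2")
# v1 is still there for a hot rollback, just out of traffic rotation:
print(fleet)
```

Where Highlander returns only the new group, red/black returns the old groups disabled plus the new one enabled; that disabled ancestor is what makes the quick rollback possible.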
So that's the one that I like the most, because it allows you to keep that ancestor version around for some period of time, and it won't just get rid of it. This is hands down the safest way to do it, especially when you consider that we have the ability to just come in here. So here's a red/black deployment, for example. The top one that you're looking at there is the one that's enabled. The bottom one is a little bit grayed out, and you can see in the server group detail pane that it tells us it's disabled. But what we have the ability to do is to come in here and say that we want to go ahead and re-enable the old one that we know is good, right? This is the ancestor version. And then come back to the new one. And in just a second, when this one is re-enabled, we can come over to the new one and say we want to go ahead and disable it. We're just going to wait on this for a second. And so we can get into that kind of hot-rollback state in a very quick way, right? And for application teams, this is a really nice thing to be able to do, because it lets you move. And that's the important part, right? Didn't Facebook say move fast and break stuff? Well, everything that Spinnaker is about is move fast, but don't break anything, right? Be incredibly safe. Be very conservative. Don't destroy anything or ruin anything. So that's kind of a nice thing. So I have a load balancer hooked up to this guy. And what we can see here is that the newer version is now out of rotation. We can see it's disabled. Its instance is red. It's very unhappy. But our old version is here. And this is just an application that has been created, packaged into an RPM, and deployed onto our base image. It's just an index.html file with nginx, but it's pretty cool, right? I love writing apps like that. If I had some PHP in there, it would have been better. But what we have is it just says hello OpenStack Summit, right?
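The hot-rollback dance just demoed, re-enable the ancestor, then disable the new group, can be sketched on the same toy model (invented data structures, not Spinnaker's actual API):

```python
def set_enabled(server_groups, version, enabled):
    """Flip a server group in or out of traffic rotation, selected by version."""
    return [
        {**g, "enabled": enabled} if g["version"] == version else g
        for g in server_groups
    ]

# After a red/black deploy: v2 is live, v1 is the disabled ancestor.
fleet = [{"version": "v1", "enabled": False},
         {"version": "v2", "enabled": True}]

# Roll back: enable the known-good ancestor first, then disable the new one,
# so there is never a moment with nothing in rotation.
fleet = set_enabled(fleet, "v1", True)
fleet = set_enabled(fleet, "v2", False)
print([g["version"] for g in fleet if g["enabled"]])  # ['v1']
```

The ordering matters: enabling before disabling is what keeps the rollback safe rather than just fast.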
And it's got one exclamation mark, which is not nearly enough enthusiasm. So if I come over here and I enable it, and then I go ahead and disable this one, then what you'll see in just a second, and it takes just a second because it's doing that polling behind the scenes and it's building up this cache data, right? And we need the cache data because we don't want to overwhelm these APIs every time we want to see something or do something. And this UI will refresh in near real time, right? It'll refresh basically as fast as the cache can get updated. And so we have the cache in the background to do that kind of regular polling that's not going to overwhelm the APIs behind the scenes. So this is way more enthusiastic, right? I've rolled forward to the new version now. And now you can see it says hello OpenStack Summit, but we get two exclamation marks. That is rad, right? I know everyone loves that. Okay, cool. So I have a Jenkins box. How much time do we have? Oh, we are over. Okay, sorry. Okay, great. So I have a Jenkins box here that has a build, and this is building my project. And you can see that for build number nine, what I've done is I've produced that RPM. And if we go to the pipeline, what we can see is that build nine was triggered, whoops, and it'll link me out to that. That would have been a faster way to get there. But we can see the details about what's happened here. And from within here, we can get those low-level Packer details that tell us exactly everything it was doing to get that stuff installed. It installed our RPM. Our RPM said, hey, you should also install nginx. And then it placed our index.html in the right location. And the outcome of this is that we got an image ID. And that image ID, in turn, is what was passed off to the deploy stage. And that's pretty much how we do that. And I'm just going to jump back quickly to my slides. While Dan's doing that, I think we're going to wrap up soon.
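The caching behavior mentioned above, poll the cloud APIs at a fixed interval and serve the UI from the cache, can be sketched as a generic polling cache (this is the general pattern, not Spinnaker's actual caching agent):

```python
import time

class PollingCache:
    """Serve cached data, refreshing from the backing API at most once per interval."""

    def __init__(self, fetch, interval_seconds):
        self.fetch = fetch              # callable that hits the real API
        self.interval = interval_seconds
        self.data = None
        self.last_refresh = float("-inf")

    def get(self, now=None):
        now = time.monotonic() if now is None else now
        if now - self.last_refresh >= self.interval:
            self.data = self.fetch()    # only hit the API when the cache is stale
            self.last_refresh = now
        return self.data

# Count how often the "API" actually gets hit.
calls = []
cache = PollingCache(fetch=lambda: calls.append(1) or len(calls), interval_seconds=30)
cache.get(now=0)    # first read: hits the API
cache.get(now=10)   # within the interval: served from cache
cache.get(now=31)   # stale: hits the API again
print(len(calls))   # 2
```

Three reads, two API calls: the UI can refresh as fast as it likes while the backing APIs only see one request per interval.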
But since we've gone over, if anyone has any questions, we'll just be out here at the lounge, the Target-sponsored lounge. So if anyone has any questions, please come on out, and we can hopefully help you out. Yes. So just the last thing I want to talk about really quick. So the immutable pipelines are a big deal. Another really big deal is these deployment strategies, right? So we talked about the red/black deploy, which fires up the new one and takes the old one out of rotation. We talked about the Highlander, which scorched-earths all the old stuff, right? Now, if you don't like any of those ways of doing things, and you or your organization or your project has a different way of doing it, you can build something called a deployment strategy, and then you'll get that drop-down option within the UI, exactly the same as we saw with red/black and Highlander. So you can build one that makes sense for you. And this can go off and do things like create a change ticket, or send an email to somebody, or any number of things that you could possibly conceive in your head. It can go off and destroy somebody else's project if you want it to. And all of that comes for free out of the box. Now, currently for OpenStack, custom strategies are a work in progress. It works with the other cloud providers already, and we'll get it there in the short term. It'll be good to go. Okay, great. So we'll call that a wrap. All right. Thank you all. Thanks, everyone.
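The pluggable strategies described above, where a custom strategy shows up in the same UI dropdown as red/black and Highlander, can be thought of as a registry keyed by name. A hypothetical sketch of that idea (not Spinnaker's plugin mechanism):

```python
STRATEGIES = {}

def strategy(name):
    """Register a deployment strategy under the name shown in the UI dropdown."""
    def register(fn):
        STRATEGIES[name] = fn
        return fn
    return register

@strategy("redblack")
def red_black(old_groups, new_group):
    # Built-in behavior: disable the old groups, keep them as hot rollbacks.
    return [{**g, "enabled": False} for g in old_groups] + [new_group]

@strategy("change-ticket")  # a custom, organization-specific strategy
def change_ticket(old_groups, new_group):
    # Hypothetical side effect: file a change ticket before rolling out.
    print("filing change ticket for", new_group["version"])
    return red_black(old_groups, new_group)

result = STRATEGIES["change-ticket"]([{"version": "v1", "enabled": True}],
                                     {"version": "v2", "enabled": True})
print([g["version"] for g in result if g["enabled"]])  # ['v2']
```

The point of the registry is that the pipeline only knows the strategy's name; what happens between "new group is healthy" and "old groups are handled" is entirely up to the registered function.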