Today, we'll talk a little bit about the history of how Spinnaker has been installed. I know for a lot of people it's been a pain point for a long time. And then I'm going to talk a little bit about how JPMC has evolved that process internally over the last couple of years, where we're at right now with installing Spinnaker, and where we're going. And I'll also have a little bit of a demo to show kind of what we're doing in this space. So for those that don't know me, I think most of the people in the room do. My name is Matt. I joined JPMC back in 2019, end of 2019, as a graduate. I'm now a senior associate, so I'm not sure what that is in the new terminology, software engineer 3 or something, working in Jet, which is the brand name for our internal toolchain for developers. And I focus on continuous delivery, so basically Spinnaker. And I'm now a technical lead for Spinnaker within EMEA, so I'm kind of the architect for a lot of the stuff that Spinnaker does. Most of you will know me from open-source stuff. We contribute quite a lot back, bug fixes and things like that that we find running Spinnaker in a large corporation like JPMC. So just a quick overview of Spinnaker at JPMC. It says here 40k deployments a month. I don't have the latest figures because when I was putting these together, somebody wiped our Elasticsearch instance, but those were the figures I had at the time. And we're expecting to grow that significantly this year. We're currently going through a huge migration from on-premise to public cloud, so Kubernetes and Cloud Foundry, pivoting most people towards AWS. And that's our company strategy: they want as many people as possible on AWS as soon as possible. So today Spinnaker deploys to on-prem targets. We have an internal Kubernetes platform, and we also have a huge Cloud Foundry footprint, which is where the majority of these migration targets are coming from.
And in terms of deployment to public cloud today, Spinnaker, we support EKS obviously. We've just rolled out support for ECS. We've built our own integration with Terraform for infrastructure deployments, and coming this quarter is Lambda. And then in the future, we've got an Azure control plane coming this month or next month, so Spinnaker will do Azure very soon. And at some point further in the future, Google Cloud. So AWS is probably the 80, 90% use case for most of the applications at JPMC. In terms of why Spinnaker, I mean, we've had a lot of testimonials, particularly from people who were deploying to Cloud Foundry with legacy tooling. They've seen a huge reduction in time for deployments. One of the drawbacks of our legacy tool was that it would do deployments in sequence, for example; obviously with Spinnaker, there's no need for that. So they see huge reductions in deployment time. Right now we've got over 4,000 applications in Spinnaker. Some of those, a lot of those are ours; probably upwards of 20 or 30 of those applications are internal stuff, and then the rest are client applications. For Spinnaker itself, we have two instances running, private cloud and public cloud, although we're trying to merge those into one hybrid instance. So deploying Spinnaker, a bit of a history lesson, right? For a lot of people it's been a very big pain point. It's quite a big barrier to entry to try and understand how to deploy Spinnaker. It's obviously itself made up of multiple different Java services; I think 10 was the figure quoted this morning. So originally a tool was written to kind of orchestrate deploying all of those different things at the same time, called Halyard. And Halyard, let's just say, has problems. Kleat came around as a successor to Halyard and then kind of got immediately abandoned; it never really took off. So Halyard has kind of been the de facto standard for installing Spinnaker for the best part of five, six, seven years, however long it's been around.
So I went and cherry-picked a couple of GitHub issues from searching for Halyard. So you can see here, you know, Halyard errors, how to install using Halyard, I could not install using Halyard, hal deploy apply failed, and so on. Yeah, if you search Halyard in GitHub issues, you'll get over a thousand results, which shows it's quite a big problem for people. And obviously Spinnaker itself isn't the easiest thing to operate, so having such a difficult barrier to entry means that a lot of people pick it up, get stuck and then give up. So the current landscape for deploying Spinnaker: Halyard was deprecated a little while back, which was an interesting choice because nothing really took over. So we've had a deprecated deployment tool with no replacement for quite a while. More recently there was an RFC to make the Spinnaker Operator, by Armory, the default deployment tool. I think that went through semi-recently. Some honorable mentions here as well to OpsMx's Spinnaker Helm chart. And I think Karl also put together some Kustomize stuff for Spinnaker, which I think we're talking about moving into the open-source org at the moment. So we've gone from having one deprecated installation method to three different ones, and we don't really officially say which one you should use. So obviously we started our journey with Spinnaker at JPMC quite a while ago. I've been working on Spinnaker since I joined back in 2019, and obviously these three options weren't around back then. So when we started, we were using Halyard. Yeah, I'll say you can still use Halyard if you want to, but I really wouldn't recommend it. So basically at JPMC, right, being a financial institution, we're very heavily regulated, and that causes a lot of problems when you try to use something like Halyard, because you need to evidence what you've deployed, when you deployed it and how.
So for those that don't know, Halyard is basically a standalone service that you deploy as a container. And then you basically exec into it and run commands to configure it; it basically generates a configuration file for you. And then you type another command to apply it, and it spits out a load of Kubernetes manifests and applies them to your namespace, which isn't great when you have auditors and regulators asking you where your manifests are stored, when they were committed, who changed them. It's basically impossible because they're all generated at runtime. Halyard relying on exec'ing in also poses a challenge for us, because our internal cloud platforms and public cloud platforms have been locked down. So we can't actually exec into pods outside of dev, which is obviously a problem when your deployment tool relies on you exec'ing in to do your deployment. So we basically had a choice between running Halyard in dev and deploying to production, which I'm sure you can imagine the regulators wouldn't have been too pleased about, or doing it from Docker Desktop, which I'm sure they'd be even less pleased about. So it basically made it a non-starter for us. The other key problem we have with an installation tool, which isn't unique to Spinnaker, but for CD tools in particular, is: how do you deploy a CD tool, right? If you have a major outage and your entire application gets wiped out, how do you redeploy your CD tool if your CD tool is gone? So anything we needed to do needed to be able to run manually without any CD tool involved. And finally, we really didn't want to use Kustomize for this. We have a whole load of environments, and Kustomize very quickly gets quite bloated. You can imagine having a load of patches for every single environment you're running; when you're running more than 10 environments, it gets very complicated very quickly.
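To make that audit problem concrete, a typical Halyard session looks roughly like this. This is an illustrative sketch, not a verified command reference; the exact flags and account names are from memory. The point is that everything is imperative commands against a running daemon, and the manifests only exist once the apply step generates them:

```shell
# Illustrative Halyard workflow (flags approximate, names hypothetical):
# first, exec into the running Halyard container...
kubectl exec -it halyard-0 -- bash

# ...then mutate the config imperatively, one command at a time
hal config version edit --version 1.30.0
hal config provider kubernetes enable
hal config provider kubernetes account add my-account \
    --context my-cluster

# finally, generate the manifests and apply them, all at runtime --
# nothing is ever committed to Git for an auditor to inspect
hal deploy apply
```

Contrast that with a manifest checked into a repository, where the commit history answers "what, when, and who" for free.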
So what we came up with in the end was a solution where you can use Spinnaker to deploy Spinnaker, or you can deploy Spinnaker using some scripts in the event that Spinnaker isn't available. And what we've done is basically use the integrations within Spinnaker. So we've built our own Terraform integration, but obviously that's not reliant on Spinnaker; you can run Terraform yourself. We've used Helm, so Spinnaker can bake Helm charts, but again, you can use Helm on your own without Spinnaker. And for anything else, we've also got some Kubernetes jobs, and again, you don't need Spinnaker to deploy a Kubernetes job, but Spinnaker can. And with all of those combined into a single pipeline, we have end-to-end provisioning of a Spinnaker environment: infrastructure, any dependencies, all of the stuff we've built, APIs, and Spinnaker itself, all in a single pipeline. Which means if we want to spin up a new development environment, a developer doesn't have to go into Halyard, remember 50 different hal commands, and make sure the one they change doesn't overwrite someone else's change. It's all in Spinnaker's UI. And the other key benefit is that using Helm, everything is templated. So kind of similar to Kustomize, where everything's a patch, but slightly less bloated, in that you can put all your variables in a single file. So for example, this is the repository we've come up with; it's kind of small, I don't know if you can see it on that screen. But we basically have a Helm chart for Spinnaker itself, a Helm chart for all of our dependencies, so all the APIs we've built internally around Spinnaker, and a storage folder, which is stuff like Redis, stuff like that. And then you'll see there's a couple of shell scripts, which can basically run independently to do all of this stuff outside of Spinnaker. So as I said, if Spinnaker's gone, you have to be able to run this install manually from your desktop or something.
So this script is, I think, 102 lines at last check; it's very simple. It basically just runs Helm, so it's not complicated to understand and it's not difficult to maintain. And it's also basically doing what Spinnaker does, right? It does a helm template and then a kubectl apply. So it's very difficult for something to go wrong or something to be broken, and if it is broken, it's very easy to troubleshoot, because ultimately it's two commands. And as I was saying earlier, you know, a new environment. So this is a variables file for a development environment; a new environment is just one file. So here, right, we've got configuration for MySQL for Clouddriver, Front50 and Orca. With Kustomize, that would be three patches, one for each of those services. Here it's just one variables file: you plug in the values, they get injected. So with that, I'm just going to switch and show you some of the pipelines that we've come up with. So it's locked me out. This would be the opportune time for us to have a production incident and this not to work. But basically you can see here that I've created a whole load of pipelines that do different things to do with spinning up a Spinnaker environment. So this orchestrate one at the top is basically responsible for just invoking all of these other pipelines. This is what a developer would come in and click if they wanted to spin up a completely new environment. And basically all it does is it spins up the infrastructure, sorts out certificates using a job, does the dependencies Helm chart, which I showed earlier, and then installs Spinnaker itself, right? So I can quickly show. So this is the infrastructure orchestration one. So obviously we have, for example, a pipeline that deploys our Aurora database using the Terraform stage that we've built, and we replicate this for all the different infrastructure components we have. But again, all of this is just template files, right? So it's all variables.
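To sketch the shape of that fallback script: none of the names below are the actual JPMC repo layout, chart path, or values keys, they are invented for illustration, but the structure is the same idea. One values file per environment, then render the chart and apply the output:

```shell
#!/usr/bin/env bash
# Minimal sketch of a "deploy Spinnaker without Spinnaker" script.
# All names here (chart path, values keys, namespace) are illustrative,
# not the actual repo layout described in the talk.
set -euo pipefail

ENV_NAME="${1:-dev}"

# One variables file per environment -- the only thing that differs
# between dev, test and prod. Writing it inline here just for the demo;
# in practice it would be committed to Git, which is what gives you
# the audit trail Halyard couldn't.
cat > "values-${ENV_NAME}.yaml" <<EOF
environment: ${ENV_NAME}
sql:
  host: mysql.${ENV_NAME}.example.internal
  # one block of values covers all three services, instead of
  # one Kustomize patch per service
  clouddriver: { database: clouddriver }
  front50:     { database: front50 }
  orca:        { database: orca }
EOF

# Ultimately the script is just these two commands, the same thing
# the Spinnaker pipeline does: render the chart, apply the output.
if command -v helm >/dev/null && command -v kubectl >/dev/null; then
  helm template spinnaker ./charts/spinnaker \
    --values "values-${ENV_NAME}.yaml" \
    | kubectl apply --namespace "spinnaker-${ENV_NAME}" -f -
else
  echo "helm/kubectl not found; wrote values-${ENV_NAME}.yaml only"
fi
```

Because the whole thing reduces to helm template piped into kubectl apply, troubleshooting means re-running two commands by hand, which is exactly the disaster-recovery property described above.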
A developer doesn't have to go in and reproduce all of this stuff every time they want to provision a new environment; they just plug in a different name and then create. So what we can see, if we click on the Spinnaker pipeline, is that ultimately it's very simple, right? We have this create-namespace stage, which is another custom stage, which basically does account onboarding. And then ultimately it's just a helm template and then deploy the output, right? Which I'm sure everyone can agree is a lot better than deploying Halyard and then going in and running a whole load of commands and then hoping that it works, and when it doesn't, panicking because there's no support. So you can quickly see the output of this single pipeline is basically a full instance, and it's not going to let me zoom in, but we have all the Spinnaker services running. So this pipeline took a couple of minutes, and I'm slightly cheating because the infrastructure was already there, so in reality it would have taken a bit longer. But deploying Spinnaker itself is like under a minute. So that's kind of where we've got to. In terms of where we're going, I mentioned that it's easy to spin up new environments. So Spinnaker itself has a bit of a scaling problem, especially with Clouddriver. We talked a bit at the platform SIG about how Clouddriver gets very resource-hungry very quickly, and we have a lot of accounts, especially big accounts. And at JPMC every team has their own AWS account, so you can imagine a firm with 50,000 developers, that gets very big very quickly. So what we've kind of realized is that having one instance of Spinnaker isn't going to be enough.
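For a rough idea of what that install pipeline looks like under the hood, here is a hedged sketch of the pipeline JSON. The bakeManifest and deployManifest stage types are standard Spinnaker stages; the custom namespace/onboarding stage and all names are placeholders, since the actual custom stage is internal to JPMC:

```json
{
  "stages": [
    {
      "refId": "1",
      "name": "Create Namespace",
      "type": "runJob",
      "comments": "placeholder for the custom account-onboarding stage"
    },
    {
      "refId": "2",
      "requisiteStageRefIds": ["1"],
      "name": "Bake Spinnaker Chart",
      "type": "bakeManifest",
      "templateRenderer": "HELM3"
    },
    {
      "refId": "3",
      "requisiteStageRefIds": ["2"],
      "name": "Deploy Rendered Manifests",
      "type": "deployManifest"
    }
  ]
}
```

Bake-then-deploy in Spinnaker is the pipeline equivalent of the two-command shell script: the bake stage is the helm template, the deploy stage is the kubectl apply.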
So the benefit of this is, right, in the same way that we can spin up a load of developer environments just by committing a single variables file, we can spin up production Spinnaker environments by committing a single variables file. And then with a little bit of magic with routing, we can dynamically move teams between instances of Spinnaker completely transparently and handle any load that gets thrown at us. If overnight the AWS migration happens and we suddenly have 30,000 AWS accounts onboarded to Spinnaker, that's not a problem, because we can just spin up a load of Spinnakers and it will all be transparent behind the scenes. So that's kind of the overview of what I've been doing, or our team have been doing, over the last couple of months. So we're using this Helm chart to deploy Spinnaker to production today. We're not quite there with Spinnaker deploying Spinnaker, but we're very close. In the next couple of weeks hopefully we'll have production end to end and get rid of everything else we're doing manually. But this is kind of what we've settled on. And the interesting thing is that if you use Spinnaker to deploy Spinnaker, you kind of get all the benefits of Spinnaker as well, right? So obviously if it was configured properly you'd be able to see all of the Spinnaker services here running, so you could create an application per Spinnaker instance, for example. And the other key thing comes back to evidencing, which I mentioned earlier. So we've built a whole load of integration with evidence stores for deployments, for our customers to use when they use Spinnaker. Basically, as a regulated company we have to evidence every single deployment that happens: when it happened, who did it, all that kind of thing. So we've built a custom integration using Echo to do that. And now if we use Spinnaker to deploy Spinnaker, then we can evidence ourselves, right? There's no need for us to go in and say, I promise that I configured it properly, here's a screenshot.
I definitely didn't break production, it wasn't my fault. There's an end-to-end audit trail of who did it, who triggered it, what parameters were used and what happened, right? So that's kind of the thinking behind using Spinnaker: it's basically dogfooding, right? We're using the things we've built ourselves, and in that way we get a lot of benefits. So I mean that basically concludes the demo. I would run this pipeline right now, but we've been having some issues with our container registry and I don't want to bring down this environment in case someone's using it. So instead I'll just switch back over here, and that is basically the conclusion. So if anyone has any questions about what we've done, feel free. Do you want me to get you a mic? Is that the way the other one went? Yeah, so the trigger at the moment is manual. I don't know where it went, I think someone took it. So the question was what trigger is being used to deploy Spinnaker. So right now we don't have automated triggers set up for this. The way it works in JPMC is kind of convoluted because of all the evidencing stuff we have to do. We have to evidence that we tested everything properly and so on, which means just using a Git trigger doesn't work, because there has to be something in the middle. So right now it's a manual trigger: the developer would go into the Spinnaker UI and click play on the pipeline. What I'd like to see happen is like a webhook or something that gets an evidence-complete event from some downstream system and just does things automatically, which is kind of there for teams building things internally, but not really there for external things like Spinnaker, right? So obviously our internal teams have to evidence all of their build events, their test events, all that kind of stuff, but that doesn't really apply if you're just procuring an image from Docker Hub and then running it, right?
So there's a bit of manual steps there, but it's still better than deploying Halyard and doing all that manually. I think there's a switch at the bottom. I think having more than one is good. I think we should probably advertise one, because getting into Spinnaker is already quite daunting, right? You come onto the website and it's like, here's a whole load of configuration you have to do. And then to say first you need to choose between one of 10 installation methods is a bit of a, okay, maybe I don't want to do this kind of thing. So I think we should probably push one as a preference, or at least choose one that we'll document everything against, because ultimately the steps are kind of transferable, right? Whether you're using Helm or Kustomize, ultimately it's generating the same manifests. So as long as you document what the format of those manifests is, it doesn't really matter what tool you use. And I don't think we've done a very good job of that in the past, right? Because Halyard kind of abstracts all of that stuff from you. Spinnaker itself is just Spring Boot, right? But Halyard, let's say you want to add another cloud provider account in Halyard. You just run hal deploy, no, not hal deploy, hal config account add or whatever the command was. And you just have a command line for that, and you don't really think about what that configuration looks like behind the scenes, because Halyard handles it all for you. And Armory's operator is kind of the same thing, right? You don't actually do all of the configuration yourself, you just plug in a couple of values and it generates stuff for you. Which isn't necessarily a bad thing, but it means if you have to go from using that to just raw Kustomize or Helm, you have to have a mindset shift from some abstraction layer to Spring. I'd like to see us settle on one as the preferred approach. As for which one, I mean, I really don't know. I think the operator probably has the most traction right now.
But yeah, we'll see what happens. The question was how much of this is used for development and testing versus deploying to production? And the answer is really that we want to use it for everything, right? We basically want to treat Spinnaker itself as a normal application that you deploy, right? You deploy it to dev once it's merged to develop or whatever, you deploy it to test, run your tests, and promote it to production. So that's where we want to get to, where every single configuration change is automatically pushed through dev, test, prod. Whereas now it's more of a flow where you make a change in dev manually, you check it works, then you merge it and deploy it to test, then you test that manually, then you merge it to master and then you schedule a manual release. And it's a bit of an annoying workflow. And it's also very slow, right? People aren't incentivized to deploy to production if it takes a week to schedule it and then do all the testing and everything. So we really want to treat Spinnaker as any other software component where you just deploy incremental changes. I think it's going to take us a while to get there. Part of that will be confidence, but at the same time, after using this for a little while, it's really easy to see the benefits, so I think that confidence will grow very quickly. Yeah, we'll pass the mic there. Hi. Is it appropriate for release management? Sorry, can you say that again? The release management, when you put a version and you want to manage which version is in which environment? So you're talking about Keel, the managed-delivery aspect, or? Release management for when you deploy a version to an environment. I'm not sure I'm getting the question. So you're asking how we do release management, or? Yeah, with Spinnaker. Yeah, so it kind of ties back to the external-image thing I was talking about, right? It's kind of a weird one, where most teams at most companies are building their own thing.
So they have the whole process of release management kind of down to a tee. When you're procuring an external image, it's a bit different, right? Because there's not really a development cycle; you just procure the new version, test it, and then release it. So in terms of release management, we kind of just wait for the stable version to be released. So, you know, the latest LTS or whatever, procure that and then release that. We don't use the nightly builds or unvalidated builds or anything like that, just because it would be an evidencing nightmare. I mean, it would be nice to use master; we'd have the latest and greatest. But just due to the nature of the company, I don't think that's realistic. There's too much at stake for us to mess up. I hope that answers the question. I think we've got like 30 seconds if anyone has any questions. But yeah, I'm here, we're here today and tomorrow. So if you're interested in what we've done and what our approach was, happy to chat about it. I was going to say, what I personally would like is to consider open-sourcing this stuff. At the moment it's very heavily JPMC-specific, so it'd be kind of difficult to just say, here you go, use it. But I think as we standardize more on public cloud rather than private cloud, it might get a bit easier for us, once not as much of this configuration is JPMC-specific anymore, and we might be able to make a case, at least, for JPMC to consider it. So if you are interested in that, let me know and we can keep you up to date on what we're doing. Cool. Thank you very much.