Hi everyone, my name is Danny, and I have with me Praveen. We're from JPMorgan Chase, and first we'd like to thank everyone for turning up to this session. I know it's the end of the second day. What we'd like to talk about is our continuous delivery platform at JPMorgan Chase and what it's doing to enable our journey to the public cloud. Our public cloud has run primarily on AWS for a few years now, and we're in the midst of a huge migration effort. Over today and yesterday we've seen quite a few tools out on the showroom floor, and one of the questions I like to ask myself is: what does it really take to enable this at scale? We've seen a lot of tools with lots of different things, compliance features, deployment capabilities, but it turns out that when we bring tools like these into JPMorgan Chase, it actually takes quite a lot of effort to really bring them into the firm, realize the business value, and ultimately scale their adoption. So for those of you who work at companies operating at scale, where you have thousands of developers, I hope you can take something from this session when it comes to providing an awesome developer experience that makes it easy for users to deploy to the cloud as quickly and safely as possible.

It's very easy to talk about what JPMorgan Chase does, and some of you have Chase accounts, but I really want to look at it from a technology perspective, because that's the heart of what we do. Across the firm we have over 55,000 technologists, and we've been growing pretty rapidly; I think we were at around 50,000 just last year. 35,000 of these are actually software engineers. There are multiple lines of business: the corporate and investment bank, asset and wealth management, cybersecurity, risk and compliance, and the chief technology office teams and divisions.
They all have different use cases and goals, and across those 55,000 technologists we have many types of job families: software engineers, infrastructure folks who are concerned with how we operate our bare metal, data scientists, architects, engineering managers. Our software engineers primarily want to focus on building awesome products with little effort. Engineering managers are concerned about the productivity of their teams, making sure they're following the best practices and building things safely. Architects are concerned about building things properly so they can scale, so we can serve our customers around the world. As a result, we've got thousands of applications out there generating millions of deployments per month. So when you think about bringing in a continuous delivery tool, you really have to think about all the different use cases and problems our technologists are trying to solve. I mentioned some of our lines of business: in asset and wealth management, for example, you get a lot of applications around portfolio management, buy-side trading, and research; data scientists have their use cases around machine learning; and the corporate and investment bank has huge, huge systems that require running large infrastructure. So when it comes to deployments, deploying safely to production is absolutely critical.

The question out there is really: how do we enable continuous delivery at scale for JPMorgan? If there's one thing you take home from this talk, it's this slide. We'll talk in more detail about the building blocks required to enable continuous delivery, and ultimately modernization, at JPMorgan Chase as we go to the public cloud. I've put up five building blocks here.
First, the application framework: we have something called the Maneta framework, and this allows us to set the foundation of the code base. Second, one of my favorite things, blueprints: how do I make an application? For example, how do I make a microservice, or how do I provision my EKS cluster properly? Third, pipelines as code, an extremely important ingredient for making our solutions portable and shareable across the community. Fourth, our continuous delivery platform: we'll talk about how we use Spinnaker at JPMorgan Chase, what modifications we've made, and how we've built around it. And finally, something called cloud parties, which is not the usual kind of party you've attended before.

So, the first ingredient: the application framework. At JPMorgan we have an application framework called Maneta. This is essentially customization on top of Spring Boot, and most of our applications use it; we're primarily a Java shop. What you get when you check out a sample application that uses this framework is the typical practices around authorization, authentication, and logging, things to help you fill your POJOs when it comes to serialization, and how you do HTTP. But more importantly, it has all the configuration to get you started integrating with internal systems, such as our ID Anywhere system for authentication and authorization. It allows us to centralize and really standardize how we want applications to be built. Going back to the 55,000 technologists, or the 35,000 software engineers, spread across multiple lines of business, we have to be very careful about making sure developers build things properly, in a predictable way. And the Maneta framework has realized many benefits. For example, we can simply update a package such as log4j in the event of a vulnerability and roll it out firm-wide.
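As a rough illustration of that firm-wide rollout idea, here is a minimal sketch of bulk dependency bumping. The repository names, build-file format, and function names are invented for the example; the real CI tooling operates against Bitbucket and raises pull requests.

```python
import re

# Hypothetical sketch of firm-wide dependency bumping; names and the
# build-file format are illustrative, not the actual internal tooling.
def bump_dependency(build_file: str, artifact: str, new_version: str) -> tuple[str, bool]:
    """Rewrite `artifact:<old-version>` pins to `artifact:<new_version>`.
    Returns the updated text and whether anything changed."""
    pattern = re.compile(rf"({re.escape(artifact)}):(\d[\w.\-]*)")
    updated, count = pattern.subn(rf"\g<1>:{new_version}", build_file)
    return updated, count > 0

def plan_pull_requests(repos: dict[str, str], artifact: str, new_version: str) -> dict[str, str]:
    """For each repo whose build file pins `artifact` at an older version,
    produce the patched file content that a bulk pull request would carry."""
    prs = {}
    for repo, build_file in repos.items():
        updated, changed = bump_dependency(build_file, artifact, new_version)
        if changed and updated != build_file:  # skip repos already up to date
            prs[repo] = updated
    return prs

repos = {
    "payments-api": "implementation 'org.apache.logging.log4j:log4j-core:2.14.1'",
    "risk-engine":  "implementation 'org.apache.logging.log4j:log4j-core:2.17.1'",
    "static-site":  "# no Java dependencies",
}
prs = plan_pull_requests(repos, "log4j-core", "2.17.1")
print(prs)  # only payments-api needs a pull request
```

At firm scale the same loop would run over tens of thousands of repositories, with a dedicated team watching the vulnerability feeds and triggering the run.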
We have tooling within our CI ecosystem that allows us to schedule upgrades across tens of thousands of Bitbucket repositories and generate pull requests en masse, so there's no action required from users. You can have teams that are focused on watching for these vulnerabilities and performing the upgrades for everyone. And interestingly, 80% of our application footprint is actively maintained on the latest version of the framework. As we go through the slides, a few more ingredients get layered on top of this application framework, and you'll start to realize how important this piece is.

Next, at JPMorgan we have something called blueprints. The question we're trying to answer here is: how do I build something that is safe and compliant, and adheres to the standards and engineering practices we expect across the firm? We have two types of blueprints, infrastructure and application. Let's look at the infrastructure ones first. We have a line of business that focuses on providing infrastructure to all the other lines of business, and they also look after the infrastructure for our public cloud. For example, there's a team responsible for the database product line, a team that pretty much consists of subject matter experts in the database domain. They look after all the different databases we use in AWS, and they're responsible for defining how a database, or a cluster for example, should be stood up. When it comes to provisioning an EKS cluster, it's not a case of just logging on to the AWS console and pressing a few buttons. Because we're a heavily regulated company, there are a lot of rules we have to follow to ensure we have the right roles set up and that we're not using any features that aren't approved.
We go through a rigorous process with our cybersecurity team to get things like SOC 1 approval, to show that we're not moving passwords around and that we only log information that isn't confidential. As a result, the product lines are responsible for creating all the Terraform modules. They expect a set of inputs from the developers, and with those inputs they provision the infrastructure. That way, everyone provisioning infrastructure in the public cloud is doing it in exactly the same way. And once we have the infrastructure blueprints in the form of Terraform files and Terraform modules, we can start to package them up with the application framework, and also with CI and CD configurations: Jenkins for CI and Spinnaker for CD. Similarly, with application blueprints, the question we're trying to answer is: how should I set up my cloud-native application code? Based on the key use cases out there, we package up the application framework with the infrastructure blueprints, all the files you need, and anything required to containerize the application.

Once these blueprints are in place, we need something developers can use to easily consume them. A couple of awesome products we have internally: one is called the Engineer Channel. This is a developer portal for the entire firm that lets you browse all the different products we have internally, the blueprints, and how to get started. It generates a crazy amount of traffic, tens of thousands of hits per month; it is pretty much the firm's entry point for "how do I do something at JPMorgan". And because the blueprints are everywhere and standardized across the firm, we get very predictable workloads running on the public cloud. We also have tech primers, which are essentially how-do-I guides across the Engineer Channel, and this is usually where you find what fits your use case best.
And then finally, instead of just following lots of documentation, we have something called Kickstart. Kickstart is basically Spring Starter on steroids. It's a user interface and a command-line tool, and you use it to specify what kind of application you're building, where you're deploying it, and what kind of infrastructure you need. It then generates your entire hello-world application with all the necessary blueprints, integrated with the CI and CD configuration and everything else. Blueprints, just like the application framework, are a very powerful way of centralizing and standardizing how we build things, because it's just a case of changing the blueprint once and then using our CI tooling to upgrade everyone. We can change things in one place to really make a shift in how we want to do things. With that, I'll hand over to Praveen, who's going to talk about pipelines as code.

Hey. So, continuing on from where Danny left off. To really realize the power of the application framework and blueprints we just looked at, we had to create a portable solution that we could package into those blueprints. That's when we started our journey into pipelines as code. Pipelines as code gives us reusable code blocks that we can package into our blueprints, so that application teams using our infrastructure and application blueprints get started with our continuous delivery platform, which is Spinnaker. When we started looking at pipelines as code, the main thing we tried to solve was: how do we get something that works with our current infrastructure? At JPMorgan we have multiple version control systems, Bitbucket and GitHub, so we needed a pipelines-as-code solution that supports both.
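To make the Kickstart idea concrete, here is a minimal sketch of a blueprint-driven scaffolder. The blueprint names, file paths, and file contents are all invented for illustration; the real tool wires in the internal application framework, Terraform modules, and the Jenkins and Spinnaker configuration.

```python
# Illustrative Kickstart-style scaffolder. Everything in these blueprint
# tables is made up for the example; only the shape of the idea matters:
# pick an application blueprint, pick infrastructure blueprints, and
# always wire in CI/CD.
APP_BLUEPRINTS = {
    "microservice": {
        "src/Main.java": "// hello-world service on the application framework",
        "Dockerfile": "FROM eclipse-temurin:17-jre",
    },
}
INFRA_BLUEPRINTS = {
    "eks": {"infra/main.tf": '# module "eks_cluster" with firm-approved settings'},
    "rds": {"infra/db.tf": '# module "rds_instance" with firm-approved settings'},
}
CICD_FILES = {
    "Jenkinsfile": "// CI stages: build, test, scan",
    "spinnaker.yml": "# CD pipeline generated from a template",
}

def kickstart(app_type: str, infra: list[str]) -> dict[str, str]:
    """Assemble a ready-to-run repo layout from the chosen blueprints."""
    files = dict(APP_BLUEPRINTS[app_type])
    for choice in infra:
        files.update(INFRA_BLUEPRINTS[choice])
    files.update(CICD_FILES)  # every generated app gets CI/CD wired in
    return files

layout = kickstart("microservice", ["eks"])
print(sorted(layout))
```

The key property is that every generated repository comes out with the same structure, which is what makes the downstream workloads so predictable.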
Because of this, we went down the path of creating our own pipelines as code for our continuous delivery, and our version has certain features. It lets you template your pipelines, so if teams want common functionality across their CD pipelines, they can have it. We've also added functionality that lets users remove redundant key-value pairs. As you'll have seen, the JSON you get out of Spinnaker is quite huge; what we're trying to do is reduce the size of that JSON so that teams can keep a succinct deployment configuration in their repositories. This also allows us to package these pipelines-as-code files up as pipeline templates. We're working towards a template marketplace that supports different deployment strategies, so that when users select one of the blueprints, they can also pick a deployment strategy and get it as part of their output. Another thing we're looking to do with pipelines as code is IDE integration. We really want users writing CD pipelines in their IDE, with feedback to the user as early as possible, so that we shift away from users defining their deployment pipelines in the Spinnaker UI. Long term, we want to get to a point where our pipelines-as-code language is completely agnostic, so that if in the future we decide to use a different deployment tool, or we look at things like managed delivery, it wouldn't change the experience for our users.

Moving on to the next piece: the way we actually take the first three things, the application framework, blueprints, and pipelines as code, and realize their power at JPMorgan is through our continuous delivery platform. All our public cloud deployments at JPMorgan run through Spinnaker.
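The two pipelines-as-code ideas above, expanding a succinct file against a shared template and stripping values that merely repeat the template's defaults, can be sketched as follows. The key names and template shape are invented for the example and are not the actual internal format.

```python
# Sketch of template expansion plus redundant-key stripping for a CD
# pipeline definition. TEMPLATE and the key names are illustrative only.
TEMPLATE = {
    "schema": "v2",
    "stages": [{"type": "deployManifest", "account": "default", "timeoutMin": 30}],
    "notifications": {"onFailure": True},
}

def deep_merge(base: dict, override: dict) -> dict:
    """Recursively overlay `override` onto `base` without mutating either."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(base.get(key), dict):
            merged[key] = deep_merge(base[key], value)
        else:
            merged[key] = value
    return merged

def strip_redundant(config: dict, defaults: dict) -> dict:
    """Drop top-level keys whose value equals the template default,
    so the file checked into the repo stays succinct."""
    return {k: v for k, v in config.items() if defaults.get(k) != v}

user_file = {"notifications": {"onFailure": True}, "appName": "payments-api"}
expanded = deep_merge(TEMPLATE, user_file)      # the full pipeline the CD tool sees
succinct = strip_redundant(expanded, TEMPLATE)  # what actually lives in the repo
print(succinct)
```

Because the repo only stores the diff against the template, swapping the deployment strategy or even the underlying deployment tool becomes a template change rather than a change to every team's file, which is the tool-agnostic goal mentioned above.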
Over the last few years we've been improving Spinnaker in terms of scalability, and we've migrated the product itself into the public cloud. That uplift means we're in a great position to scale Spinnaker as a platform: as the firm moves more workloads into the public cloud, we can scale Spinnaker up and run a fleet of Spinnakers as business needs dictate. We've also been looking at developer enablement quite a lot. One of the key barriers when people adopt a new platform is: how do I get started? How long does it take to move off my existing deployment pattern? So we really had to look at how to improve the onboarding process for the platform, and through that work we've got onboarding down to a matter of minutes. Teams just go to a UI and add Spinnaker as a tool on their project; it's as simple as that. The work we did in the background to make that happen really reduced a lot of developer friction.

We've also been doing work on the risk side to make sure the platform is secure. We've fully integrated with our identity systems within the bank so that we use passwordless solutions for all our operations; teams don't have to manage any functional accounts to do deployments. Everything goes through identity systems that are integrated with us. We've also integrated with the firm-wide change control systems and with the evidence service, which lets us make sure that anything being released to production has the right evidence attached to it. Does it have the unit tests and the integration tests? Does it have any vulnerabilities? Only if those checks pass can you actually release to production, and those checks live within the platform itself.
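An evidence-style release gate like the one just described could be sketched as below. The check names, record fields, and the approved-registry value are all invented for illustration; the real evidence service and change-control integration are internal systems.

```python
# Hedged sketch of a production release gate driven by collected evidence.
# Field names and the registry allow-list are assumptions for the example.
APPROVED_REGISTRIES = {"registry.internal.example.com"}

def release_allowed(evidence: dict) -> tuple[bool, list[str]]:
    """Return whether a production release may proceed, plus the list of
    failed checks to surface as feedback in the pipeline."""
    failures = []
    if not evidence.get("unit_tests_passed"):
        failures.append("unit tests missing or failing")
    if not evidence.get("integration_tests_passed"):
        failures.append("integration tests missing or failing")
    if evidence.get("open_vulnerabilities", 1) > 0:  # missing data fails closed
        failures.append("unresolved vulnerabilities")
    registry = evidence.get("image", "").split("/")[0]
    if registry not in APPROVED_REGISTRIES:
        failures.append("image not from an approved registry")
    if not evidence.get("change_ticket_approved"):
        failures.append("change control not approved")
    return (not failures), failures

ok, why = release_allowed({
    "unit_tests_passed": True,
    "integration_tests_passed": True,
    "open_vulnerabilities": 0,
    "image": "registry.internal.example.com/payments-api:1.4.2",
    "change_ticket_approved": True,
})
print(ok, why)  # True []
```

Note the fail-closed default on the vulnerability count: if the evidence record is missing data, the gate blocks the release rather than letting it through.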
What this essentially means is a reduced failure rate for production deployments, because of these additional checks we have in place. So that's our continuous delivery platform. Connecting this back to the previous sections, the blueprint and Kickstart integration means that every time a new application is created, or an application goes through the modernization process, it will select Spinnaker as its deployment platform, because we're integrated with them from the start.

On to the final part: how do we actually drive adoption of Spinnaker? At JPMorgan we have a concept of cloud parties. At a cloud party, we get the experts from all our product lines, experts from our AWS team, our continuous integration team, databases, cybersecurity, networking, together with a cohort of the application teams that want to migrate to the public cloud, and we put everyone in a room for three to four days at an in-person event. Everyone comes with set goals for what they want to achieve. That might be moving a particular microservice to a cloud platform, or they might have already started the migration and be looking at database migrations. Whatever the use case, they come to these cloud parties and, working with the experts from the product lines, use the blueprints we looked at before and our tech primers to see how to actually achieve it. Through the cloud parties we can validate that our product lines are providing the developer experience that developers need, and because the experts are right there, we make sure any feedback we get is implemented as soon as possible. The cloud parties have been a massive success, and they really lead to a multiplier effect across teams.
Because once we get all these different LOBs coming to our cloud parties, successful migrations for these teams lead to a multiplier effect within the firm. They just learn a lot during these days, and it's a great experience. I think that's all we wanted to cover, so we'll go to questions.

You indicated wanting to run Spinnaker as a fleet of multiple instances. How did you come to that conclusion, as opposed to just scaling a single instance and running it cross-region?

I spoke about this in my session yesterday on deploying Spinnaker with Spinnaker. We've got a lot of different LOBs at JPMorgan, and in our efforts to run Spinnaker at scale in our private cloud platform last year, we ran into a lot of issues, especially with Clouddriver. We were hitting limits on the threads it was using, and it was restarting, which affected user pipelines, and that was with a lot of horizontal scaling. Given the size of the migrations we're attempting, we haven't even scratched the surface, so I don't think we can just keep scaling horizontally. That's why we're looking at fleet management for the future.

How do the platform team and the app development teams share ownership of these pipelines, and of the controls?

In terms of ownership of the pipelines: we provide all the tools for teams to get their pipelines into Spinnaker, and then it's still up to the teams to execute those pipelines to deploy their workloads to their environments. From a controls perspective, the platform itself has open policies in place where we check that you have the right change control, that you have the right evidence, and that the thing you're releasing is pulled from a container registry which is approved, right?
You're not pulling from the internet. We only check those major things, and we have regular reviews with our security teams to make sure all the security is in place. So we interface with the security team and make sure the controls are there, but from the user's perspective, they own the pipelines. Any other questions? Five minutes. Any questions from the back?

The cloud parties imply to me that it's not a top-down directive that you have to use Spinnaker; you're building a platform and trying to encourage people to use it. Is that correct?

Yeah, it's a mix. It is a top-down directive in that, for public cloud deployments, we do say teams should be using Spinnaker, but we give teams some time before we turn off the legacy support. We give them a few months, and during that period we give them all the tools they need to migrate. In the past, we've written migration wizards where teams plug in their existing configurations and we spit out the equivalent Spinnaker pipelines. We're also working on OpenRewrite recipes to convert those existing configurations; now that we have pipelines as code, we can do that conversion in code instead of through rich UI wizards. So in terms of migration, we tend to give teams a bit of time and a heads-up on when we're actually going to stop supporting the other platform. There is a top-down approach in that it's always communicated to the teams: you have X months to get off a certain platform. And from our side, we just make sure we give the teams everything they need to make that happen. All right, time for one more question.

Can you talk about monitoring and auditing of your individual environments? When a pipeline fails, how do you propagate that to the respective teams?
So you're asking about a Spinnaker pipeline failing for a particular team. They get that feedback right on the pipeline itself, with the reason it failed. That said, it's been a bit of a challenge for users to figure out what the error in a pipeline actually is. This goes back to a previous talk from Salesforce: some of the errors in Spinnaker are not easy to debug. Especially from our firm's perspective, where we're migrating people to Spinnaker, say they're on Kubernetes and come over to Spinnaker, or they're moving to Kubernetes for the first time, there's the learning curve of Kubernetes on top of adopting a new tool, so it can get tricky for users to understand where the error is.

What we do for our users is, on the Engineer Channel that Danny mentioned, we've got a troubleshooting page. Any time we see a common error with Kubernetes, or even a user error, we put it there. We also have our internal Stack Overflow; any time there's an issue we think might repeat, we make sure it's on that Stack Overflow site, because that's the first thing most developers check, and if they find the solution there, they don't have to raise it with us. We're also working on building out our internal knowledge base, to shift left a bit so users don't have to find the problem and resolve it themselves. We want to take those error signatures, and before Spinnaker shows that red error box of doom, intercept it, parse the error signature, fetch the recommended resolution, and inject that into the UI. That's going to help address the potential support burden on our own team. And in terms of observability, we've got a very rich dataset with a large evidence store.
We know exactly what everyone is doing: what they're deploying, to which environment, with which artifacts, and onto what infrastructure. So we have all sorts of Grafana dashboards that tell us our adoption rates, pipeline success rates, and things like that. For example, if our pipeline success rate drops in production, we know we're not hitting our SLAs. It's something we've been working on for the past several months, to really understand what our customers are doing and to track the migration to the public cloud. All right, we'll be around for a few more hours, so come chat with us to learn more about how we do CD at JPMorgan. Thanks for your time.