All right, Rebecca, get started. We're at about time, and as you know, we may run over. All right, our presentation is called Hitting the Runway with Argo CD at American Airlines. I'm Carl Hayworth. I started my career performing statistical analysis of IT systems and processes, looking for optimizations and efficiencies. After that, I got Cisco certified and worked as a network engineer in our tooling, automation, and administration departments. A bit later, I found our developer experience group and was able to make my way in as a developer. I also had the opportunity to develop our initial offerings for our Kubernetes shared enterprise platform. After a while, I became a product technical lead, and now I'm an architect over our developer experience and DevOps products, which lets me work with a lot of teams all across American Airlines.

And my name's Christian Hernandez. I'm the head of community over at Akuity. I do all the open source-y things at Akuity, including being part of the Argo project and the Kargo project as well. I'm a maintainer of OpenGitOps, which relates to Argo CD a lot; as you'd imagine, you can find more information at OpenGitOps.dev. I'm also a member of the Argo project and of SIG Marketing for the Argo project, which puts on events like this one, ArgoCon, and various things in the community, so you can follow me on the socials.

So I'm going to start off talking about Argo CD as the cornerstone of platform engineering, and I want to begin with a little backstory on OpenGitOps. OpenGitOps came out of a lot of us in the industry, from the Argo community, from the Flux community, and from various companies like AWS, Red Hat, GitHub, Microsoft, CodeFresh, and I'm probably missing a bunch, getting together and asking: okay, what actually is GitOps? What does it mean to do GitOps?
And we came up with some guiding principles of GitOps: it needs to be declarative, versioned and immutable, pulled automatically, and continuously reconciled. You can read all about it at OpenGitOps.dev. But what was interesting, as we were coming up with these principles and talking with people, is that some of the stuff we were describing as GitOps were things we were already doing, some if not all of it, which is kind of funny. I actually had a conversation with Dan, who's standing right there, along the lines of: GitOps is cool because it's the things I've always been wanting to do, or things we were already trying to do. So what's the difference? Why put a name to GitOps? Why now? And the reason was Kubernetes. Kubernetes and containerization were definitely a game changer: they gave us an immutable format, they gave us orchestration, they allowed us to do things in a declarative way. So really, the groundwork for GitOps and the GitOps principles was Kubernetes and people trying to operationalize Kubernetes.

And that makes me think about platform engineering in general. First off, that was great marketing. I even forget the name of the company that coined the term; that's how you know your marketing did well, when people completely forget about your company because they're attached to the platform engineering idea. But it reminds me of when we were talking about GitOps a few years ago, and now platform engineering has blown up the same way. A lot of us at one time or another have tried to build an IDP.
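The GitOps principles mentioned above map almost one-to-one onto fields of an Argo CD Application resource. Here's a rough Python sketch of that mapping; the repo URL, app name, and namespaces are invented for illustration:

```python
# Sketch: how the OpenGitOps principles show up in an Argo CD Application.
# The repo URL, app name, and namespaces below are invented placeholders.
application = {
    "apiVersion": "argoproj.io/v1alpha1",
    "kind": "Application",
    "metadata": {"name": "demo-app", "namespace": "argocd"},
    "spec": {
        "project": "default",
        # Declarative + versioned/immutable: desired state lives at a Git revision
        "source": {
            "repoURL": "https://example.com/org/demo-app.git",
            "targetRevision": "main",
            "path": "manifests",
        },
        "destination": {
            "server": "https://kubernetes.default.svc",
            "namespace": "demo",
        },
        # Pulled automatically + continuously reconciled: Argo CD syncs and
        # self-heals rather than a pipeline pushing changes into the cluster
        "syncPolicy": {"automated": {"prune": True, "selfHeal": True}},
    },
}

print(application["spec"]["syncPolicy"]["automated"]["selfHeal"])  # True
```

The point of the sketch is that the principles aren't extra work you bolt on; they're the default shape of the object Argo CD reconciles.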
We've all tried to build a platform, some sort of self-service. I know I have: you hammer together a bunch of bash scripts, you're doing things with Terraform, things with Ansible, trying to get some sort of platform going. So it's similar: what's platform engineering? Well, now we have a name for it, a name for the things we were trying to do this whole time and the things we have been doing this whole time. And as much as I like that marketing, I don't think DevOps is dead. Someone, and I forget who said this, so if it rings familiar, please catch me and let me know, because I do want to give credit, someone said that platform engineering is just DevOps with a product mindset. So, the same as with GitOps: why now? What led us to be able to do platform engineering? Really, it's Backstage. It's a similar story: like Kubernetes enabled GitOps, Backstage enabled platform engineering, a platform that lets us do the things we've always been wanting to do.

If you go to the next slide, thank you. Now we have Argo CD as a cornerstone of platform engineering, and Argo CD has really become a core component of building these IDPs. It's kind of cool, because we're finally abstracting away a lot of the operational work. We abstracted Kubernetes away using Argo CD; now we're abstracting the management of Argo CD away with this platform engineering layer. And it becomes boring, in a good way, to use Argo CD this way, because now you have the reliability to manage Kubernetes.
You have Kubernetes, you have Argo CD managing it, and you have GitOps as an operational framework for managing this infrastructure at scale. It really becomes a cornerstone, and you almost tend to forget about it. That's a good thing. I know I spent countless hours trying to manage and deploy applications at scale, and not just the applications, but also the platform that manages the applications. So for it to be boring is actually a good thing. I wanted to start with that and set the stage for a great story from American Airlines about how they're utilizing Argo CD, so thank you.

All right, we'll get started with a quick history of Argo CD and Backstage at American Airlines, and then show how we're utilizing them. A while back, our coaching organization found a repetitive problem: teams would come into our hangar for a coaching exercise, and the first week or two would be spent stamping out a new application repository. Kind of boring work. The team started by making a few composable templates and breaking them out so they could be used by the Yeoman generator to speed up that first week or two. While this was an improvement, there was still a lot to be desired. Right as COVID was swinging into full gear and shutting down the world, this article was shared with our emerging technologies team, and it created a few fans internally. While the coaching organization was still figuring out its shift to virtualized coaching, two members of the team broke off and started a Backstage instance at American. Within a week, work started with the Hack Week demo from Spotify, and the team cheered as a hello-world-like page emerged. We saw the vision for Backstage and we knew we wanted to be part of it. Now, you must be wondering if you took a wrong turn and made it to BackstageCon. Nope, you're still at ArgoCon, and there's a point to talking about developer experience platforms.
I mentioned Runway in the title of my talk, so let's first start with what Runway is. Looking at the American Airlines name, you might be thinking about this runway. However, I'm here to talk about a completely different Runway. Runway is our internal developer platform. We wanted to make a safe environment for our developers to deliver functionality for their applications to our consumers at a faster pace: an extensible platform and ecosystem to build on top of. Developing applications at a large enterprise can be difficult. You've got a tech radar to follow, lots of platforms to integrate with, and in some cases, if you're lucky, tons of documentation. Hopefully it's not out of date. We wanted to make things easier by abstracting away the hard parts. Our first goals were around launching a new application to our cloud partners and then launching an existing application. We started with App Services, but we just didn't see that as ideal, with many different App Services deployed across different subscriptions and resource groups. So we ventured down the path of Kubernetes and purpose-built the set of enterprise shared Kubernetes clusters that exists today, with namespaces as a service. We've also had lots of success with teams innersourcing contributions from around AIT to create new plugins that integrate with new backend systems, such as VMware platforms, data platforms, and more. We didn't want our engineers to have to jump through a bunch of different platforms to get their work accomplished. As a side note, we talk about additional plugins in our Runway docs, and here in the tech docs we show how easy it is to make contributions. We even created a standalone template for plugins, which integrates with our internal registry; we simply pull their plugins on build, and we make innersource magic happen that way. Now, moving back to the point of deploying apps, you can see a sample of all the templates we have here.
A core part of our templates is around Kubernetes and deploying applications. However, there are a few up on the screen that don't deploy, and we're going to ignore those for now. Before Runway, we were told it could take months for an application team to get their application to the cloud. Now we've simplified the process to under 20 minutes, with builds and security included. Create can either produce a new application repository, a pull request to your favorite security tools or deployment manifests, or something other than a repository as well. Here's the Next.js template that deploys to Kubernetes. Without even knowing it, our developers get their application deployed through GitOps, and more specifically Argo CD, abstracted by Runway and Backstage. By clicking the details button on a template, our users can see all the batteries-included items they get out of the box through our ecosystem. Developers no longer have to fill out service tickets or copy from outdated templates. For global traffic management, our users can simply deploy to multiple regions, and then our GTM automation in the background will automatically determine where the application is and direct traffic wherever it needs to go. Image pull secrets are included and rotated, and our internal registries are utilized. Logging and monitoring are included, our enterprise pipelines are included, and our corporate network is plugged in and ready to go out of the box. So many items that developers no longer have to fret over, and we even correlate these items to security best practices to showcase how users are following the best standards and getting business value as well. Now, we've talked about new application deployments; users can also deploy existing applications through Runway. Our users are encouraged to use our multiple abstraction layers, in this case custom resource definitions, but this template works for Helm charts and Kubernetes Kustomize resources as well.
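Whether a template emits custom resources, a Helm chart, or a Kustomize overlay, on the Argo CD side it's the same Application object with a different source stanza. A rough Python sketch of those variants; the repo URLs, paths, and value-file names are invented for illustration:

```python
# Sketch: the same Argo CD Application can point at raw manifests, a Helm
# chart, or a Kustomize overlay just by changing its source stanza.
# Repo URL, paths, and file names below are invented placeholders.
def app_source(kind: str) -> dict:
    base = {"repoURL": "https://example.com/org/app-deploy.git",
            "targetRevision": "main"}
    if kind == "helm":
        # Argo CD renders the chart with the listed value files
        return {**base, "path": "chart",
                "helm": {"valueFiles": ["values-prod.yaml"]}}
    if kind == "kustomize":
        # Argo CD runs a kustomize build on the overlay directory
        return {**base, "path": "overlays/prod"}
    # default: a directory of plain manifests or custom resources
    return {**base, "path": "manifests"}

print(app_source("helm")["helm"]["valueFiles"])  # ['values-prod.yaml']
```

That's part of what makes the abstraction cheap: the platform only has to vary one stanza per packaging style, and Argo CD handles the rendering.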
Behind the scenes, Argo CD gets to work on deploying any of our deployment-focused templates. We attempt to use relational data in the background to require as few fields as possible from our users. The fields on this page dictate which instance an application is going to be deployed to with Argo. Our resource tags are pulled from other backend systems to keep our finance and asset friends happy, and we lock down namespaces and clusters based on data provided by Rancher. After filling out a few short fields, our users get a GitHub repo and a catalog entry. All of our users start in a non-production environment; however, later in the catalog they can promote to different environments as well, using Argo. Here, a non-production Argo application was created, and before the Docker image is even produced by our GitHub workflows, Argo is already attempting to pull the image in anticipation of it being produced. I've got a short video here that goes through the entire process. You can see it's a little wonky on the screen right now, but I picked some administration information, ownership, and naming details. I'm now selecting a cluster, entering a container name, and then finally entering a namespace. Once I hit next step and then create, we go ahead and do the necessary tasks, including calling the Argo API to create a new application and project. Just like that, we have two links at the bottom, for a repository and a catalog. On the last screen you saw a link to our catalog. Here, teams can manage their GitOps deployment powered by Argo CD. Teams can deploy to another region, making active-active configurations simple, and our GTM automation in the background takes care of the rest. Application owners can even update their Argo application definitions if their repos or paths change. The frontend Argo plugin shows an overview of the instances I'm deployed to.
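The step that "calls the Argo API to create a new application and project" boils down to two REST calls against Argo CD's API, `/api/v1/projects` and `/api/v1/applications`. Here's a hedged Python sketch of what such a scaffolder action might do; the host, token, and all the names are placeholders, and authentication setup and error handling are omitted:

```python
import json
import urllib.request

# Sketch of the two Argo CD REST calls a scaffolder action might make.
# The endpoints are Argo CD's API; the host, token, repo, and names
# below are invented placeholders.
ARGO_URL = "https://argocd.example.com"

def project_payload(name: str) -> dict:
    # A deliberately permissive AppProject, just for illustration
    return {"project": {
        "metadata": {"name": name},
        "spec": {"destinations": [{"server": "*", "namespace": "*"}],
                 "sourceRepos": ["*"]}}}

def application_payload(name: str, project: str, repo: str,
                        path: str, namespace: str) -> dict:
    return {"metadata": {"name": name},
            "spec": {"project": project,
                     "source": {"repoURL": repo,
                                "targetRevision": "HEAD",
                                "path": path},
                     "destination": {"server": "https://kubernetes.default.svc",
                                     "namespace": namespace}}}

def post(endpoint: str, payload: dict, token: str) -> urllib.request.Request:
    # Builds the request; the caller would pass it to urllib.request.urlopen
    return urllib.request.Request(
        f"{ARGO_URL}{endpoint}",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST")

# e.g. urlopen(post("/api/v1/projects", project_payload("team-a"), token))
#      urlopen(post("/api/v1/applications", application_payload(...), token))
```

A real scaffolder action would also handle "already exists" responses and surface errors back into the Backstage task log, but the shape of the calls is the interesting part.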
Our cluster strategy at American Airlines is fluid, and the cluster an application is deployed to today might not be the same cluster it's deployed to tomorrow. This plugin makes it easy for application owners to quickly see where their application is located, along with a side pane if they need more details. When I'm ready to decommission my application, as a good developer I can go ahead and delete my Argo-powered deployment, and there are also options to clean up my catalog entry and my repository as well. These capabilities started with a contributor named Roadie, who provides Backstage SaaS. The original plugin was a frontend-only plugin utilizing the Backstage proxy. However, due to our scale and our patterns, we desired a little more, so we partnered with Roadie to create a backend Argo plugin to easily find applications among as many Argo instances as you have. We then sought the capability of creating applications from the Backstage Scaffolder as well, so we built that capability and contributed it back to the community, which you previously saw. Due to the contributions we've made to the Scaffolder plugin, template owners can empower their users with Argo deployments, and it only takes a few steps. The above will create an Argo project and an application and get it ready for deployment. Now, I've talked a lot about standard Argo use cases, but there are two other use cases I want to talk about as well. While you can barely see it, we also use Argo CD for our GitHub pull request environments. Once an image is produced from our CI process on GitHub, the environment comes to life, and our developers can see their work live at unique URLs. Here you can see multiple instances of our developer experience platform, and users can easily register for this functionality through our catalog. Originally, when we built the enterprise shared Kubernetes clusters, every single cluster component was installed using GitHub Actions.
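A common GitOps alternative for installing cluster components like these is the app-of-apps pattern: you apply one root Application whose source directory contains further Application manifests, one per component, and Argo CD reconciles the rest. A minimal Python sketch of such a root app; the repo URL and directory layout are invented for illustration, not American's actual structure:

```python
# Sketch of an "app of apps" root Application: its source path is a
# directory of child Application manifests, one per cluster component,
# making that Git directory the single source of truth for bootstrapping.
# Repo URL and path layout below are invented placeholders.
def root_app(cluster: str, repo: str) -> dict:
    return {
        "apiVersion": "argoproj.io/v1alpha1",
        "kind": "Application",
        "metadata": {"name": f"{cluster}-bootstrap", "namespace": "argocd"},
        "spec": {
            "project": "default",
            # Directory containing further Application manifests
            "source": {"repoURL": repo,
                       "targetRevision": "main",
                       "path": f"clusters/{cluster}/apps"},
            "destination": {"server": "https://kubernetes.default.svc",
                            "namespace": "argocd"},
            # Keep components converged without any pipeline involvement
            "syncPolicy": {"automated": {"prune": True, "selfHeal": True}},
        },
    }

app = root_app("prod-east", "https://example.com/platform/bootstrap.git")
print(app["spec"]["source"]["path"])  # clusters/prod-east/apps
```

Once the root app is applied, adding or upgrading a cluster component is just a commit to that directory; no workflow needs credentials to the cluster.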
The team, in collaboration with an outside Kubernetes expert, created some improvements. Now all of our components are installed using Argo CD. This simplified our workflows and ensured we had a single source of truth for all of our cluster bootstrapping information. Our workflows now install Argo CD with the app-of-apps definition, and we're mostly done; we leave the rest up to Argo CD. GitOps allows us to control and manage our application deployments with a single source of truth, no matter how many times they're deployed. There are no continuous deployment pipelines and no secrets to power those workflows either. Whether we have 50 clusters to keep bootstrap information in sync for, an application deployed to multiple regions or clouds, or a single application in a single location, Argo CD helps, and with the abstraction layers, our developers barely know it's there. Using a developer experience platform allows the underlying platforms to shine while still abstracting them. Argo CD is great at what it does, and many of our developers, again, have never even seen it. Our developers can just focus on their feature delivery and leave the hard parts to our platform engineering teams. Thank you. And I think we've got time for a few questions as well, if there are any. Start queuing up behind the mic there.

Thank you, the talk was really good. I have two questions, actually. The first one is: how many instances do you use to visualize that in Backstage? And you're creating the applications through the API; are you storing that back in Git, or how are you managing those applications that get created?

What was the first question? I couldn't hear that.

For Backstage, are you using multiple instances of Argo CD to visualize, because one cluster has, like, an application?

Got it, thank you. So currently, due to the way we originally implemented things, we have one Argo instance per cluster.
So every cluster that's listed inside our Backstage instance also has a paired Argo CD instance. And then the second question was around... oh yeah, he's coming back.

Yeah, you're creating the applications through the API. Are you storing those in Git afterwards, or do they just live in the...

Right now we're not storing them in Git afterwards. We do take backups of all of the items that have been deployed, so we have copies of that, but we're not storing them in GitHub right now. That's a good suggestion, though. Any other questions?

Kind of curious, you mentioned the abstraction layer. So are your actual developers then sort of unaware of Kubernetes itself? Like, how much interaction with K8s are they doing? Are they doing it all through your Backstage interface?

Yeah, great question. So the only entry point to our enterprise shared Kubernetes clusters is through our Runway platform, so it is heavily abstracted. On top of that, we have a custom resource definition with a custom operator. It's called a web app YAML, and in about 10 lines of YAML a developer can deploy their application without even knowing Kubernetes. So instead of having to fill out many different types of manifests (deployments, services, ingresses, HPAs), they can fill out one resource type, and we encode what the best standards are for American Airlines. A lot of our users don't actually know Kubernetes. One of our users didn't even know how to containerize or dockerize; we helped them out, they got on their way, and they barely know Kubernetes.

How do you handle blue-green or canary deployments, and what configurations happen on the ingress and the ingress controller or the proxy that you're using?

Good question. So right now we have not implemented different deployment strategies, blue-green or canary or anything like that. And I think that covered the second part of your question as far as ingress flow.
Are you making any special configurations on the ingress controllers to inform them that a new version of the application has come?

Thank you for that. So our ingress operator, as we call it, watches for any new ingresses to come up, and for changes or deletions of ingress resources. That operator then communicates with our global traffic management microservice, which makes any adjustments on our CDN and security provider layers. Through Runway we do have various troubleshooting tools; it also shows all the Kubernetes pods. However, even though we're abstracting away all the layers, power users who know Kubernetes can still use kubectl or any of the management interfaces. And we have a full Kubernetes platform service team, sitting here in the audience, who takes care of any user issues that users can't handle themselves.

The provisioning of the Git repository when you create an application from Backstage, right? Is it always the approach of one Git repo for the whole app, or is there ever an approach where an application could be comprised of a Git repo for this service and a Git repo for that service?

If you're building different microservices, then of course you could have repos for the different services themselves. We've seen some teams that separate their deployment manifest repo from their actual application repo as well. So there are a few different patterns, although through the Runway platform, the main pattern is that when you click on a template, in most cases you're getting a new repo. However, we can also do pull requests to existing repos, and that's something we're digging more into now. Thank you.

The certified?

Great question.
So the certified label means that the template is owned and maintained by our team, and we're guaranteeing with a certain level of certainty that the template will work: the workflows will work, it'll be able to deploy, you're not going to have issues. Now, with Backstage, anyone in our community can contribute templates, and by default they're not certified. Only templates owned by our team and our process are certified. As far as versioning goes, we're still a little early in our versioning strategy and haven't completely solidified it yet. But that's something we're getting to now: different versions of the templates, so our users will be able to do a diff between templates to figure out what they're missing and might need to add. Thank you for that. Go ahead.

So the templates for Backstage can include a little bit of everything. In this case, the template you saw was deploying a Python FastAPI app: a FastAPI boilerplate, just a hello world to get users started and get something launched, and it also contained the Kubernetes manifests, or custom resource definitions. The templates also include pipelines and everything you need to be successful and get going.

We have HashiCorp Vault at our company, and we utilize HashiCorp Vault.

Any other questions? All right, looks like we're out of time. Thank you very much. Thank you.