Hello, everyone. I'm Zachary Zaring, and I'm here to present a behind-the-curtain look at multi-cluster Backstage deployments with GitOps, Crossplane, and more. Quick intro: I'm a senior software engineer at Grafana Labs on the platform team, and there's a link to my GitHub. So let's get to it. Show of hands, who likes Legos? Nice. Building a deployment pipeline is a lot like building a BYOB Lego set, except here it's "bring your own bricks." Today, we're going to look at how we at Grafana Labs deploy and manage our multi-cluster Backstage instances and try to help you wrangle your own deployments using this framework.

So what does that look like? Here's a quick overview of the stages, or layers, of the pipeline. We'll step through each layer and break down the bricks, or components, involved. Note, we'll be focusing on Kubernetes as our workload orchestrator, but I imagine these principles are applicable to other platforms. Also, since this is a lightning talk, we're going to go quick. A lot of these topics and tools could span full talks themselves, so we'll just focus on the TL;DR versions.

First, we start with our repos and version control system. On the left, we have the building blocks, and on the right, our implementation of each of them. Obviously, you'll need a VCS to store and version your code. Then we have three repos: the custom Backstage app, the configuration code, and the manifests. At Grafana Labs, we use GitHub for repo management, and you can see our three corresponding repos.

Next, we have the continuous integration layer. This is composed of two bricks: a CI platform and a CI workflow for the Backstage repository. The base of the workflow is simple: build, test, push (for pushing images to a registry), and submit (for submitting a continuous deployment workflow). More on that later.
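A build-test-push-submit workflow like the one described here could be sketched roughly as the following GitHub Actions file. This is an illustrative sketch only, not Grafana's actual configuration: the job names, registry URL, yarn commands, and deployment workflow file are all hypothetical, and registry authentication is omitted.

```yaml
# Hypothetical sketch of a Backstage CI workflow: build, test, push, submit.
name: backstage-ci
on:
  push:
    branches: [main]
jobs:
  build-test-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: yarn install --immutable
      - run: yarn test                     # main must stay deployable
      - name: Build and push image (registry auth omitted)
        run: |
          docker build -t example.registry.io/backstage:${{ github.sha }} .
          docker push example.registry.io/backstage:${{ github.sha }}
  submit-deploy:
    needs: build-test-push
    runs-on: ubuntu-latest
    steps:
      - name: Submit a deployment workflow for the new image tag
        run: |
          argo submit deploy-backstage.yaml \
            -p image-tag=${{ github.sha }}
```

The key design point from the talk is the last step: CI doesn't deploy directly, it only hands the newly tagged image off to the continuous deployment workflow.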
It's important to remember the Backstage repo is meant to be collaborative across many teams, so CI is very important to ensure the main branch is always deployable. A merge to main results in a built image and submits a workflow to deploy the newly tagged image. In other words, it triggers continuous deployment. At Grafana, we use GitHub Actions as our CI system and have an associated GitHub workflow for Backstage. A simple step submits an Argo Workflow (Argo Workflows is a workflow orchestration tool) to start the deployment process for the newly minted image tag.

Next, we have the configuration as code, which is the critical layer for wrangling multi-cluster deployments. We need code for generating the infrastructure supporting our Backstage application and code for generating Kubernetes resources that leverage that infrastructure. These code bases should generate all the definitions and resources Backstage needs. We use Jsonnet to model our deployments. In short, Jsonnet is a data templating language that, combined with some powerful tooling and libraries, gives us the ability to generate complex and varied YAML manifests for multiple clusters. I won't go too deep into Crossplane either, but essentially it lets us define infrastructure abstractions as custom resource definitions, so we can treat persistent infrastructure the same as Kubernetes resources. With Jsonnet combined with Crossplane, we were able to consolidate all our as-code into one source and language, so no more Terraform. We use Crossplane to define buckets for TechDocs, define DNS mappings for our container-native load balancing for ingresses on GKE, and define PostgreSQL instances, including configuring the database, user accounts, and permissions. Since all of this is defined in Jsonnet, we can easily reference objects and have a tighter relationship between deployment and cloud resources.
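To make the "tighter relationship" concrete, here is a minimal Jsonnet sketch in the spirit of what's described: a Crossplane-managed bucket and a Deployment defined side by side, with the Deployment's env var derived from the bucket definition, generated per cluster. Everything here is illustrative and hypothetical (cluster names, the provider API version, field names), not Grafana's actual code.

```jsonnet
// Hypothetical sketch: per-cluster Backstage config in one Jsonnet source.
local clusters = ['dev-us-east', 'prod-us-east', 'prod-eu-west'];

local backstage(cluster) = {
  // One shared local ties the cloud resource and the workload together.
  local bucketName = 'backstage-techdocs-' + cluster,

  // Crossplane custom resource: a GCS bucket for TechDocs, treated
  // like any other Kubernetes object.
  techdocsBucket: {
    apiVersion: 'storage.gcp.upbound.io/v1beta1',
    kind: 'Bucket',
    metadata: { name: bucketName },
    spec: { forProvider: { location: 'US' } },
  },

  // The Backstage Deployment references the bucket by construction,
  // so the deployment code and infrastructure code can't drift apart.
  deployment: {
    apiVersion: 'apps/v1',
    kind: 'Deployment',
    metadata: { name: 'backstage' },
    spec: {
      template: {
        spec: {
          containers: [{
            name: 'backstage',
            image: 'example.registry.io/backstage:abc123',
            env: [{ name: 'TECHDOCS_BUCKET', value: bucketName }],
          }],
        },
      },
    },
  },
};

// Generate the full manifest set for every cluster.
{ [c]: backstage(c) for c in clusters }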
So that brings us to the continuous deployment layer, where we need to take those as-code sources and turn them into real YAML manifests for Kubernetes and infrastructure resources. We'll need a brick for exporting those manifests; a brick for taking your infrastructure declarations and generating them, if that isn't done at the reconciliation level, which we'll cover next; and a method for handling updates when a new version of the app lands. Note that we don't have the infrastructure automation block on the right. That's because of our use of Crossplane: we don't need to explicitly automate infrastructure, and we can rely on Crossplane abstractions and providers. We use Tanka to take the Jsonnet code and export manifests to our Kubernetes manifests repository, which includes Crossplane custom resources to manage and provision cloud resources. And finally, we have Argo Workflows to handle progressively rolling out new images through our dev and prod clusters. Health checks for Backstage are critical for rollouts, as they let us know whether it's safe to continue and put our rollouts on guardrails.

Finally, we have the reconciliation layer. This is composed of a GitOps brick and an infrastructure orchestrator brick, which is optional if you're provisioning your infrastructure earlier in the process, at the infrastructure automation stage on the last slide. We use Flux as our GitOps tool, and it uses the Kubernetes manifests repository as a source. Then we have Crossplane, composed of multiple providers, which reads in the custom resources that have been applied by Flux and manages those cloud resources. In summary, Flux reconciles Kubernetes manifests and Crossplane reconciles cloud infrastructure resources, declaratively.

So let's see all the bricks connected together. This is the complete flow of the pipeline. Here we can see how we go from source code to running workloads in clusters. Changes to the configuration flow through the pipeline and are propagated to running workload configurations.
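The Flux side of that reconciliation loop might look something like the following. The repo URL, names, and paths are made up for illustration; the idea is just that Flux watches the manifests repository and applies whatever Tanka exported there, including the Crossplane custom resources.

```yaml
# Hypothetical sketch of the Flux objects for the reconciliation layer.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: backstage-manifests
  namespace: flux-system
spec:
  url: https://github.com/example-org/backstage-manifests
  ref:
    branch: main
  interval: 1m
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: backstage
  namespace: flux-system
spec:
  sourceRef:
    kind: GitRepository
    name: backstage-manifests
  # Tanka-exported manifests for this cluster, incl. Crossplane resources
  path: ./clusters/prod-us-east
  prune: true
  interval: 5m
```

Once Flux applies these manifests, the Crossplane providers take over and reconcile the cloud resources they describe, so both halves of the system are driven from the same Git source.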
A couple of additional takeaways we had while deploying Backstage. It's important to keep it simple and use the path of least resistance when it comes to your platform. Spoiler: Backstage is deployed like a lot of other non-trivial apps, so keeping it in line with your existing pipeline really helps with maintainability and understandability for the engineering teams maintaining Backstage. Also, Crossplane abstractions are really great. We have a really refined Jsonnet/Tanka deployment pipeline but had an awkward, not very well integrated step with Terraform to handle infrastructure for applications. With Crossplane, we could leverage that refined pipeline and have a closer relationship between our deployment code and infrastructure.

And thank you, thanks for listening. There's my email for any questions or areas of improvement on our processes. Here at Grafana Labs, we're just getting started with Backstage, so we're really eager to learn more and see what works for y'all. I don't know if there are questions. Yeah, folks, there are questions. I guess it's a dangerous spot before lunch, so totally understand if people wanna grab some snacks. But yeah, just in case anybody has questions. Ah, there's one.

"Hi, you used Flux v2 but used Argo Workflows as well. Why would you use Flux instead of Argo CD? That seems like a curious choice."

Yeah, so we had Flux beforehand. We needed a tool to orchestrate workflows, and Argo Workflows is a great tool for that. If we saw a benefit worth the extra engineering effort to migrate to Argo CD, we would totally use it. But that's just how platforms are built. Any other questions? Okay, enjoy lunch. Thank you.

Thank you, Zachary.