Hi everyone. I'm Shaperuppa. I work on Project Carvel, where our goal is to provide a set of reliable tools that help you automate and run your software in production. Today I want to share a real-world story about a platform operations team that has been able to manage over 30 clusters and several applications in production, in a highly regulated environment, using the Carvel toolchain. But before that, I cannot contain my excitement: Project Carvel was just accepted as a sandbox project in the CNCF, and I am so happy to be here today saying that.

Now, back to the story. Let me talk a little bit about the challenges this team faced on their production journey. First, their applications have complex deployment topologies, so installing these applications by following a playbook of scripts was toil heavy and error prone, and their development teams really didn't like that experience. The development teams didn't have the right level of access to run these scripts, they didn't know where to run the scripts from, and managing fine-grained access was hard for the platform operations team. As a lot of us do today, they also depended on a lot of open source and third-party software, and they needed a way to ensure that this software was built securely, was easily accessible in their various environments, and was easily customizable. Security, compliance, and reliability are really important to this team, and they knew that the least error-prone and most user-friendly way to satisfy these criteria was to do so programmatically. They wanted to adopt the GitOps mindset for managing their clusters, and what really resonated with them about GitOps was this idea of deploying continuously, declaratively, and collaboratively as an organization. So what did they do? The team first started to keep configuration in a central Git repository.
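A GitOps setup of the kind this team built is typically expressed as a Carvel kapp-controller App resource pointing at that central Git repository. This is an illustrative sketch, not the team's actual configuration; the repository URL, namespace, service account, and paths are all hypothetical:

```yaml
# Minimal kapp-controller App resource (hypothetical names and URL):
# fetch configuration from Git, template it with ytt, deploy it with kapp,
# and re-sync the cluster against the repo on the given interval.
apiVersion: kappctrl.k14s.io/v1alpha1
kind: App
metadata:
  name: platform-config
  namespace: platform-ops
spec:
  serviceAccount: platform-ops-sa    # hypothetical service account
  syncPeriod: 10m                    # reconcile against Git every 10 minutes
  fetch:
  - git:
      url: https://github.com/example-org/platform-config   # hypothetical repo
      ref: origin/main
      subPath: clusters/prod
  template:
  - ytt: {}
  deploy:
  - kapp: {}
```

With a resource like this in place, merging a change to the repository is all it takes to roll it out; the controller picks it up on the next sync.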
This repository became a central point of collaboration for the entire organization; they could source changes to it from various teams. Then they started relying on Carvel's package manager, kapp-controller, to create clusters following this configuration. Now they could make a single change in their configuration, and kapp-controller would automatically roll it out to their many clusters. Previously, updates of this kind took days of coordinated effort across multiple development teams, but now they were able to update their applications and their clusters in dev, test, and prod in minutes. They could also automatically provision production-like clusters in dev, which was really helpful for their testing.

The next key thing for them was continuous reconciliation. kapp-controller provided much better accountability when rolling out centralized configuration than the playbook-driven approach used to provide for their development teams. Using kapp-controller, they were guaranteed that each cluster converged with the configuration every 10 minutes. This prevented configuration drift and eliminated the problem of snowflake clusters for them. It also meant they could guarantee that their deployments were compliant with the declared policies.

Then they started bundling each application's configuration, its Kubernetes manifests, and its dependencies in a single immutable OCI artifact using a Carvel tool called imgpkg. This OCI artifact could now be signed, relocated, and referenced by its unique digest, and that's how they are today able to relocate their apps and their dependencies to clusters, including their edge clusters. And finally, they used Carvel's YAML templating tool, ytt, to write overlays for third-party software. The overlays could be checked into Git, and they were integrated into their GitOps deployment system powered by kapp-controller, as we talked about earlier.
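An overlay of the kind described here might look like the following sketch, which patches a third-party Fluent Bit Deployment without forking its upstream manifests. The resource names and values are hypothetical, chosen only to illustrate the technique:

```yaml
#@ load("@ytt:overlay", "overlay")

#! Match the upstream Deployment by kind and name, then patch just the
#! fields this environment needs to change.
#@overlay/match by=overlay.subset({"kind": "Deployment", "metadata": {"name": "fluent-bit"}})
---
spec:
  replicas: 2                      #! environment-specific replica count
  template:
    spec:
      containers:
      #@overlay/match by="name"
      - name: fluent-bit
        resources:
          #@overlay/match missing_ok=True
          limits:
            memory: 256Mi          #! add a limit the upstream manifest lacks
```

Checked into Git alongside the base manifests, an overlay like this is applied during the ytt templating step of the deployment, so each environment can carry its own small patch file instead of a forked copy of the software's configuration.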
So this empowered their developers to leverage that centralized configuration, while also providing the flexibility they needed through environment-specific overlays. With this, the platform operations team was really able to enable the development teams to provision new clusters with common software like cert-manager, Prometheus, Fluent Bit, ExternalDNS, and other commonly used components, but with approved versions and approved configurations, in a matter of minutes. They were also able to keep these clusters upgraded; they upgrade multiple times a month to stay on top of CVEs and so on.

It's been awesome for us on Project Carvel to support this platform operations team, to have them be part of the community, and to co-develop with them. They've been a big part of our journey to the CNCF. So if you want to manage your production software on Kubernetes securely, reliably, and with automation built on top of native Kubernetes APIs, following GitOps principles, then come join us on Project Carvel. You can find us at carvel.dev and on the Kubernetes Slack. We're super eager to build in the open with you, and super eager to see you leverage Carvel, like various CNCF projects do, as building blocks for your own projects. The team and I will be at the VMware booth later today and tomorrow, and we'll have lots of talks and demos there. So hope to see you there, hope you have a fun KubeCon, and hope we continue this conversation. Thank you, everyone.