Welcome, everybody. Glad all of you are joining us here. Welcome to our talk on building a platform engineering fabric with the Kubernetes API at Autodesk. Over the past couple of years we've been on a journey to transition our developer platform away from a set of organically grown, tightly coupled tools with extensive configuration and libraries into a declarative, API-first platform with the Kubernetes API at its core. This is an area with lots of interest across the industry, so our hope here is to convey some of the challenges, some of the wins, and our general experiences during this journey in order to help others be successful at a similar journey.

Before getting into our talk, let us briefly introduce ourselves. Jesse, would you like to? Everybody, you found us. We're at the end of the hallway. I'm Jesse. I'm a senior principal software engineer here at Autodesk, and I focus on the juncture of our platform engineering and software supply chain security initiatives. When I'm not in front of a computer, I like to sail and backcountry ski and pretty much do anything outdoors. Thanks, Jesse. I'm Greg Haynes. I'm a software architect and I'm also the open-source lead at Autodesk. I focus primarily on internal developer platform software, and my background before that was primarily in open-source cloud platform development. I spent about 10 years doing that before coming to Autodesk, and now I'm excited to get back to it.

So today we're going to start off with an overview of where our developer platform was at the beginning of this journey and how we got there. Next, I'll talk about our North Star, or what motivations led us to this major effort. Then we'll cover some of the alternatives considered as part of planning out how we'd accomplish this. Afterward, we'll cover a couple of specific technical challenges we encountered and how we solved them. And to wrap up, we'll hopefully share some big-picture wisdom and some lessons learned.

Now let's begin the story of our developer platform. This is a very high-level view, and from this angle I expect it should be pretty familiar to all of you. This is an early, rudimentary version of our developer platform. At the time, it was primarily focused on simple app delivery. At the bottom, we have our user, who primarily interacts with two services: GitHub for source control and Jenkins for CI/CD. The user supplies some application configuration via Git, which is then executed by Jenkins and eventually results in artifact generation, such as Docker images, with simple CI/CD processes around them. Over the past 10 years, this seems to have been sort of the industry-standard starting point for application delivery, and we were no different at the time.

Over time, our CD needs diverged from our CI needs. Just as many others have experienced, we needed more complex functionality to implement promotion workflows, autoscaling, and deployment strategies. To solve this, we adopted Spinnaker to manage our CD processes and kept Jenkins owning CI. Regardless of the specific tool choice, we now ended up with another user interaction point: a separate CD tool, Spinnaker in this case, which has an even greater depth of functionality than our Jenkins CI infrastructure. Then, shortly afterwards, the scope of our developer platform grew beyond application delivery.
We needed to provide compliant infrastructure management, cloud accounts and networking, and generally wanted to own the increasing functionality that our engineering teams needed in order to ship reliable software quickly. Now we have a suite of in-house tools and libraries which integrate with Spinnaker and our other CD tools, and that our users also need to interact with. The result is that our users and our CI/CD platform now have this n-by-n integration matrix and this n-by-n interaction matrix. Users are responsible for setting up accounts and access and providing application info via the suite of in-house tools, which is sort of at the bottom right there, and then copying that information to configure their CI and CD pipelines to correctly integrate their application with these tools. So think copying state into GitHub, and then Jenkins and Spinnaker consuming that state. Essentially, what ends up happening is we've evolved our platform to scale our development abilities, but in doing so it really feels like we've pushed the complexity and integration cost onto our users. And they let us know it. We weren't happy with it. They weren't happy with it. So we set out to solve that.

While there are some specific ways we could have solved just that problem, if you step back a bit, there were some larger problems that we also started to realize. Obviously the high number of different user interaction points required for common workflows was the one I was just mentioning, but compounding this is that we also have our own application configuration interface, which is a flat file in Git, or actually a set of flat files in Git. So this is yet another interface for users to become familiar with. And maybe this is some foreshadowing that there's an opportunity to consolidate this.

Another separate issue is the integration cost. In these cases where we try to improve our user experience by creating integrated workflows across tools, we forever pay a tax whenever any of these tools change. A great example is the developer portal, which ends up being the service that integrates with all other user-facing services in the platform. Imagine it as a single dashboard for discovering your application information. How would you implement that in this system? Are you going to go and write integrations with every other service, all of our in-house tools? Even if we were able to do that, what would the maintenance cost be over the long term? It would probably massively hit our velocity.

Finally, and honestly probably most important, there's sort of a sense that we might need to rethink what our dev platform is and the product that we're making. What I mean by that is: are we giving our users tools so that they can own triggering their own deployments, or are we actually giving them interfaces so they can describe to us what the deployment should look like and we own it? An example of why this might matter is what happens when you want to take greater ownership of the regionalization of your services. If teams are responsible for triggering deployments, is a new region rollout, or a modification to which regions you're supplying, a fire drill for every one of your engineering teams? How else can you scale that is, I guess, the question. So with that, I'll hand it over to Jesse to talk about our North Star.

Thanks, Greg. So why would we want to replatform on the Kube API?
I mean, we already have systems that are building our infrastructure and delivering our apps in a self-service manner, abiding by our compliance requirements, right? Couldn't we just optimize our developer workflows on our existing tooling? This is where we need to pause and look out into the future a little bit. Let's gaze at that North Star. Let's imagine a magical place where we have things like engineering workflows that are actually enjoyable developer experiences, and separation of concerns through an ownership model that actually includes our security and compliance teams. Verifiable cryptographic signatures on packages for pre-deploy governance, and isolation through well-understood RBAC and least-privilege account access. Reduction of duplication through code reuse and compositions that are obvious. Increased collaboration and easy discoverability. Low overhead for contribution through well-patterned extension points. And finally, consistent interfaces for extension and interoperability.

With that vision in mind, let's take a look at what we can get from the Kubernetes API and the operator pattern. Capabilities can now be represented through their declarative interfaces as operators, which means that we can have continuous reconciliation on any of the resources that they manage. We also have patterns for integrating external, off-cluster resources with the help of tooling like Crossplane. This enables a clear separation of concerns between our core platform team and the teams that are building on top of the platform. It also means that these platform capabilities can develop airtight abstractions without leaking the details of their backing services. And that, of course, will enable our inner sourcing, and our teams will be unlocked and decoupled from one another, able to control their own release cycles for their capabilities. The Kube API and the KRM come with strong API semantics, so we get built-in validation, versioning, and conversion mechanics. We can also make use of RBAC and namespacing to restrict users to their respective domains, and we can make use of admission webhooks to further limit what actions can be taken on the cluster, ensuring our security and compliance. And finally, if we're able to funnel all of these resources through the central management plane, we essentially get a configuration management database, or inventory management database, for free.

It does sound magical, right, in theory. So what does this look like? Here you'll see we have this top-level capability for software lifecycle management. It's the entry point. It could be Spinnaker or any other tooling that can manage the declarative source of truth inside the Kube API. Kubernetes reports on that state and allows for policy to be defined through the use of admission controllers like Gatekeeper. Resource managers are implemented as operators and can wrap off-cluster resources, like those available from common cloud providers. However, even though the underlying providers may be very diverse, these interfaces are now very familiar. Additionally, we have a common bus by which these capabilities can be discovered and consumed.
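To make that concrete, here is a minimal sketch of the kind of interaction we mean. The group, kinds, and field names below are hypothetical placeholders rather than our actual APIs: a tenant declares a capability claim in their own namespace, an operator (or Crossplane) reconciles the backing off-cluster resource, and a namespaced Role scopes the tenant to exactly that.

```yaml
# Hypothetical capability claim: the tenant describes intent declaratively,
# and a controller continuously reconciles the real cloud resource from it.
apiVersion: platform.example.com/v1alpha1
kind: MessageTopicClaim
metadata:
  name: orders-events
  namespace: team-payments          # tenants work only in their own namespace
spec:
  accountId: "123456789012"         # which cloud account to place it in
---
# RBAC keeps the tenant scoped to their own domain: they may manage claims
# in their namespace and nothing else on the cluster.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: capability-consumer
  namespace: team-payments
rules:
  - apiGroups: ["platform.example.com"]
    resources: ["messagetopicclaims"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
```

Because every capability is expressed this way, the API server ends up holding the full declarative inventory, which is where that CMDB-for-free property comes from.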
For some of you, this may start to sound familiar: building wrappers for services in a common, well-understood integration framework. It sure sounds a lot like service-oriented architectures to me. And like the transition to SOA from point-solution integrations that happened in the early 2000s, we are asking our folks to organize around a common pattern for integration and to decouple their services. Effectively, we're moving away from something like the spaghetti on the left to something along the lines of the hub-and-spoke model on the right. In this world, we can imagine the Kube API fitting in the middle where, amongst many other things, it can take the place of a traditional enterprise service bus, brokering communication between these services. But if you've ever used ESBs in an enterprise, you know how hard it can be to actually make them successful. They were really hard to fund at scale through entire organizations, especially multinationals with continuous M&A. And if you're not careful, you can end up in a situation where the ESB is actually a limiting factor for how quickly your teams can release. That can be because of numerous issues, from scaling to mandatory library upgrades to coordinated maintenance windows and so on and so forth. These cross-organizational synchronization points are hard to manage and are a nightmare to govern. And pretty quickly, with all the hacks and workarounds, you end up with that spaghetti again.

However, unlike the SOAs of the early 21st century, the Kube API is lightweight. It doesn't require the services themselves to run on its data plane. The integrations can be maintained by independent teams as long as they have the skills to manage their own operators. And as we said earlier, they're not locked into coordinated release cycles or dependency upgrades.

So if everyone is building and offering and consuming these capabilities independently of one another, how do we keep control of this thing? That's where good old governance comes in. Platform teams, just like any other, are limited by their people power. In order to make sure that what is offered is maintainable and trustable by our users, we need to limit the capabilities that they take on. However, that doesn't mean that we can't allow for inner-sourced or even completely community-driven capabilities to be offered as part of the ecosystem. Capability owners outside of the core platform team will need to abide by certain guidelines to have their capabilities made available as part of this platform offering. Additionally, there may need to be some amount of security or compliance review applied to these capabilities in certain high-trust environments. So there needs to be some sort of graduation process to get these capabilities reviewed and into those inner spheres of the platform offering, and explicit governance should guide this. Furthermore, if we think about the life of a capability on the platform, it takes on the curve of a normal product, right? Capabilities come, and hopefully they see adoption, and if they're popular, they will grow in adoption until they're mature and then eventually age out of use. Therefore, we'll likely need a way of culling capabilities out of that stable, maintained inner core over time. Having explicit governance here will pay off in droves later.

So ultimately, how do we get these capabilities into the hands of our end users? Well, that's where our work starts to bump up against the other initiatives we have at Autodesk. Just like many of our peers, Autodesk is working to create an internal developer platform for all of our product teams.
And now that we have this platform fabric, we can enable that IDP tooling to do things like discover all resources. We can enable the creation of blueprints or service templates simply by wrapping our capability CRDs. And the real force magnifier here is that our capabilities can now be used in conjunction with the numerous other Kubernetes-native tools offered by the CNCF. Many of these names you'll recognize, and the possibilities are wide open.

So this vision is great. But how do we get our platform teams ramped up? How can they gain the skills to work with the Kubernetes API and the muscles to build operators without stopping everything and retooling? Well, when we took a look at our existing app delivery platform, we basically had two choices. The first is a top-down approach, where we describe our full application delivery journey with a big macro top-level CRD, which could then be decomposed into multiple sub-resources, each with their own operators. However, if we did that, we would be replacing our existing app delivery pipelines in their entirety, and we would have to deal with things like routing and progressive deployments and stateful resources right out of the gate. And these things are hard. This is tricky stuff to cut your teeth on.

Instead, we chose to start bottom-up. It just so happened that our existing offering already had something along the lines of managed resources, something that we called simple infra, where we had capabilities that were being consumed through well-understood, completely encapsulated declarative interfaces. While we could hopefully get some early wins from this, the work wouldn't prove out the full solution, and we knew that not all use cases would be represented there. However, we could manage that uncertainty and make room for it later as we gained clarity. So we start with these smaller atomic sub-resources and allow our engineers to build out their tooling awareness and fluency. They can get a handle on what the SDLC workflows look like; things like repo structure, local development environments, CI/CD automation, and testing can be patterned. Additionally, we can lean on tooling like Crossplane to defer the cost of having the team learn the full inner workings of the operator pattern. Crossplane allows us to extend the Kube API with custom types without having to build our own operators. And using Crossplane's compositions and XRDs, we can offer the same simple infrastructure resources to our users while building in sane defaults for security and compliance requirements. There is a trade-off in how much power we have, though. But because Crossplane registers native Kubernetes types, we can swap out the implementation of the reconciliation of those types later with our own custom controllers.

So what happens when Crossplane compositions are not powerful enough? How do we handle something like dynamic compositions, i.e. those that have an optional or varying number of sub-resources at reconcile time? Crossplane was meant to be a spot-checkable, declarative interface to resources. It intentionally does not allow for complex programming semantics like looping or Boolean logic in its compositions. So then how do we create n listeners for a single SNS topic, like in the example here? We can't possibly hard-code them all into the composition statically. So what if we want end users to be able to specify, you know, some number of email subscriptions? How do we do that?
Do we go straight into building our own operators? As it turns out, KubeVela can help here. We can use it as a bridge between the simplicity of Crossplane and the complexity of writing our own controllers. It does mean that we introduce the CUE language to our platform capability developers, but CUE is a special-purpose configuration language, and it's really the only reasonable option beyond pre-processing of manifests like you would see with Kustomize or Helm. Using KubeVela, we can wrap Crossplane compositions as components, in either application types or traits, depending on how we want them to be consumed. And then in our CUE templates, we can do things like loop over resources or use Boolean logic to insert them based on parameters. You can see in this example how we fan out our subscriptions with a for loop. If any of you know Viktor's DevOps Toolkit series, you may recognize this pattern; he did a video about a similar use case a little while back. This is very powerful, but certainly harder to reason about when things go sideways, right? The lack of that spot-checkable clarity of intent is a trade-off for power here, and there's still some discussion we have internally about when it makes sense to make this trade-off.
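To give a flavor of what that fan-out looks like, here is a minimal sketch of a KubeVela ComponentDefinition that loops over a list of email addresses in its CUE template. The resource group, kinds, and parameter names are illustrative placeholders, not our actual definitions.

```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
  name: sns-topic-with-subscribers
spec:
  workload:
    definition:
      apiVersion: simpleinfra.example.com/v1alpha1
      kind: Topic
  schematic:
    cue:
      template: |
        parameter: {
          topicName: string
          accountId: string
          // any number of email subscribers, decided at render time
          emailSubscriptions: [...string]
        }
        // the main output is the topic itself
        output: {
          apiVersion: "simpleinfra.example.com/v1alpha1"
          kind:       "Topic"
          spec: {
            topicName: parameter.topicName
            accountId: parameter.accountId
          }
        }
        // auxiliary outputs: one Subscription per requested email address
        outputs: {
          for i, addr in parameter.emailSubscriptions {
            "subscription-\(i)": {
              apiVersion: "simpleinfra.example.com/v1alpha1"
              kind:       "Subscription"
              spec: {
                topicName: parameter.topicName
                accountId: parameter.accountId
                protocol:  "email"
                endpoint:  addr
              }
            }
          }
        }
```

An end user then just supplies a list of addresses in their Application, and the number of rendered Subscription resources varies with it.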
But this is just a glimpse into our platform vision. Now I'm going to pass it off to Greg; he's going to talk more about where we're at with implementing our solution. Thanks, Jesse. So as Jesse said, I'd like to talk about some of the specific technical challenges we've encountered on our journey, as well as some of the solutions we've devised for them. Jesse did an excellent job showing our long-term vision and sort of this overarching picture of where we hope to be. But along the way, there are actually some very simple lessons we've learned about even the most rudimentary use cases, which I'd like to share. To that effect, I'm going to focus on a few problems that were not only important to our success, but that I also believe are inherent architectural problems with the use of the Kubernetes API in this problem domain. I'll be mentioning some Crossplane-specific terms because that's the technology we're using, but really these problems can be generalized. And one pattern I want to highlight is how there's a unique combination here of mature technology being applied to an emerging use case. I see this over and over: Kubernetes APIs and controllers have a lot of established best practices, although the documentation can vary, but applying those mature practices to this problem domain opens up new problems, and you have to fall back on those original best practices and pull them forward into this problem domain.

So the first challenge we encountered, and Jesse mentioned this a bit, was: how can we own our API while also leveraging existing implementations? An important property of our API should be that we can modify the implementation or, as Jesse mentioned, swap it out later on. Maybe we want to start with something simple and swap to something more complex later on, or something in-house so we can be more expressive. So in this example, on the left I have a resource definition for an SNS topic, which is what we're calling simple infra. SNS topics and subscriptions end up being our hello world for simple infra management on this control plane.

On the left is what I would expect the resource to look like if I were to ignore implementation and focus purely on API design as I made the CRD. It allows users to set a topic name and information for a cloud account to create the resource within. Then on the right, I have an implementation of this using Crossplane's AWS provider. This is an existing, off-the-shelf resource I can use to make the topic, and it actually works today. So I would expect the API on the left to end up creating the thing on the right, essentially. Let's talk through the differences between the two so we can build the map between them.

To start at the most basic level, we have a simple problem: the API version at the top. This is quite literally a string for the purpose of identifying the implementation. And while it would be silly to create a wrapper just to solve this, it's a really good example of how exposing this implementation directly to our users immediately locks us into the implementation. Not to mention migrating across resource types is harder than migrating across versions, so try to imagine what that would have to look like. Just as in normal software development, the idea here is we don't want to leak implementation details beyond our API unless we have to.

The second difference is this simple field for specifying the ID of the cloud account in which to create this topic. In this case, we can make some assumptions because our users and our platform understand the constraints on this ID; essentially, we want to ask our users just, what is the ID for your cloud account? But on the right we have the implementation, which is part of Crossplane and can make fewer assumptions. The ID needs to be a field referencing what's called a provider config, which is an actual Crossplane resource. Our users, A, don't need to know what a provider config is, and B, don't necessarily need to know the specific name of the provider config. They just need to know their account ID. So if we went down this route, we'd end up leaking those implementation details: what our naming conventions are, what resources we're using under the hood.

Fortunately, Crossplane provides us the tooling for exactly this use case. In addition to the resource controllers themselves, which are that implementation I mentioned for the SNS topic, they also provide a framework for building these indirection layers in front of these resources. I don't need to go into too much detail about how it works because, frankly, they've got documentation on how to do that, and that would be a better use of your time. But the idea here is that by using these tools, we can end up owning our API and also not have to maintain our own ever-growing set of custom controllers. The high-level idea is that you specify a set of mapping rules between that API on the left and the implementation on the right, via a CRD or via a resource: you're mapping information from API to implementation. But one of the more powerful tools we've found is this extra field on the right, region, where we're specifying where the resource goes. If you notice, it's not on the left at all. This ends up being defaulting, which is a really powerful feature, because imagine your users are making an RDS instance and you don't want to expose all of its fields for them to configure. This defaulting allows us to specify those for them and basically control which fields they have access to.
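Roughly, the shape of that mapping looks like the sketch below. The groups, field paths, and provider kinds are illustrative and will differ depending on the Crossplane provider and versions in use: the user-facing resource on top carries only what we want to expose, and the Composition patches it onto the implementation, with region defaulted and never surfaced.

```yaml
# User-facing resource: just a topic name and a cloud account ID.
apiVersion: simpleinfra.example.com/v1alpha1
kind: Topic
metadata:
  name: test-topic
  namespace: team-payments
spec:
  topicName: test-topic
  accountId: "123456789012"
---
# Composition mapping that API onto the provider's managed resource.
apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: topics.aws.simpleinfra.example.com
spec:
  compositeTypeRef:
    apiVersion: simpleinfra.example.com/v1alpha1
    kind: XTopic                        # the composite backing the Topic claim
  resources:
    - name: sns-topic
      base:
        apiVersion: sns.aws.crossplane.io/v1beta1   # provider-specific; may vary
        kind: Topic
        spec:
          forProvider:
            region: us-east-1           # defaulted by us, never exposed to users
      patches:
        - type: FromCompositeFieldPath
          fromFieldPath: spec.topicName
          toFieldPath: spec.forProvider.name
```

The defaulting in the base is what lets us decide which knobs users get: anything not patched from the user-facing resource is simply ours to set.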
So now I'd like to transition into how we handled multi-tenancy and mapping that to our cloud accounts. I mentioned before that we specify this field on our resources to tell Crossplane which cloud account they're placed in. If we extrapolate that, we end up with this idealized design, where it actually looks like this: tenants can make resources in a namespace, resources in a namespace point to a corresponding provider config, which is just a fancy way to say a cloud account configuration, and these provider configs each map to one cloud account. However, the reality is far less ideal, and that's often the case with any large organization, especially one with legacy software. We have all manner of cloud account strategies, frankly, and they range from fairly large sets of teams all sharing a single account, which is A plus B on the left, to small teams who essentially make an account per application, which is C on the right.

But even more than that, there's a subtle but extremely important challenge to recognize: we may have competing design goals at the different layers in this diagram. Within our Kubernetes dev platform, we have our own opinions on what a tenant is and how granular we want those to be, and we also have the design constraints imposed by Kubernetes, which may be unique. But then at the bottom, there are technical implications based upon your account strategy with these cloud providers. And we don't control those; they're up to the cloud providers, and on top of that, they may actually conflict between cloud providers. So just as before, we need an indirection layer as you go top down in this diagram. That way we can pick our own tenancy model at the top and still rely on whatever tenancy model fits best for each cloud provider. To solve this, we use the same indirection layer, compositions and XRDs. We make our provider configs one-to-one with cloud accounts while making our namespaces one-to-one with the developer platform tenant model. We then use compositions to pull out the account IDs specified on each resource and use them to reference a provider config in the underlying implementation. And while I'm talking about Crossplane-specific terms here, this problem is inherent in the use of the Kubernetes API for this use case, regardless of the technology you're using to create these resources. So our solution is to inject these account IDs into a standard field on resources, which, frankly, I feel is a very rudimentary solution. It's something we came up with because it solved our needs right then. But it would be great to see community efforts to both develop a more robust solution and also standardize this architecture so that it's not implementation or use case dependent. So hint for community members out there.
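Concretely, the pattern we landed on looks something like this sketch, again with illustrative names and field paths: one ProviderConfig per cloud account, named after the account ID, and a composition patch that turns the account ID field on each resource into that reference.

```yaml
# One ProviderConfig per cloud account, named after the account ID, holding
# whatever credentials or role are needed to act in that account.
apiVersion: aws.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  name: "123456789012"
spec:
  credentials:
    source: InjectedIdentity            # or Secret/IRSA, depending on setup
---
# Excerpt of a Composition patch list: the account ID the user supplied
# becomes the providerConfigRef on the composed managed resource.
patches:
  - type: FromCompositeFieldPath
    fromFieldPath: spec.accountId
    toFieldPath: spec.providerConfigRef.name
```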
One more use case I'd like to go into is resource referencing. So again, to go back to the same SNS topic example that I keep returning to, we also want to create subscribers to our topic. And remember from before, we also created this indirection layer, so what I'm showing you here are our resources, not necessarily the implementation.

There are lots of ways to design an API to do this, but we found that it's often best to avoid composing these two resources into one, the, I keep calling it the Winnebago resource internally, but the resource where you have sub-resources within it that each have their own properties. That's a fairly deep topic which, again, could use some documentation, but in my experience building controllers, it tends to be an anti-pattern in this ecosystem. A very simple reason is the coupling of failure domains you get when this occurs: ideally, a misconfiguration of a subscription doesn't result in the entire SNS topic failing to reconcile.

Doing this brings up a second challenge, though. We have our indirection layering system, and now we have a topic and a subscription resource, but they're actually thin wrappers around the implementation. Under the hood, during this indirection, we end up with resources that are not named identically or necessarily consistently with the original user-specified resource. I'm not going to go into too many details about why, but the general idea is that some of these resources might be cluster-scoped rather than namespace-scoped, and the end result is that we need more uniqueness on the implementation resources. So as you can see on the left, our user created a topic called test-topic and a subscription which refers to this topic. The problem is that we now need to know the corresponding unique name of the implementation resource for that topic so our subscription can reference it. You'll notice how the name on the right there has the unique identifier.

So at this point it's hard not to feel like maybe we just created a bunch of problems for ourselves by overthinking this. We created this indirection layer, we avoided composition, and maybe we should just do away with all this stuff because then all of this would work. But have no fear. Even though I couldn't find any documentation on this, it is actually a real-world example too. We ran into it with some engineers who were cutting their teeth on resources similar to this, and there's really no documentation about this pattern. But we were clearly not the first team here, because there is a solution implemented for this within Crossplane, which is essentially that there are labels that get put onto these resources when they're claimed, and there's a selector system to cross-reference these resources. So you can use selectors on these labels to uniquely identify the resource by its original name, which is what's going on here with the selector referring to those two labels. And while this is a Crossplane-specific implementation, I again expect any similar tool to have similar challenges, because it's nice to have your implementations cluster-scoped rather than namespace-scoped for various reasons.
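Here's roughly what that looks like in practice. Crossplane stamps claim-identifying labels onto the resources it creates on behalf of a claim, and provider reference fields accept a label selector, so the subscription can find the topic by the name the user actually chose. The provider group and forProvider field names below are placeholders, and the sketch assumes the composition propagates those labels down to the composed topic.

```yaml
# Managed subscription resource (its own name is generated and unique).
apiVersion: sns.aws.crossplane.io/v1beta1
kind: Subscription
metadata:
  name: test-topic-email-sub-8f2kq
spec:
  forProvider:
    protocol: email
    endpoint: team@example.com
    # Instead of hard-coding the generated topic name, select it by the
    # labels Crossplane applies for the originating claim.
    topicArnSelector:
      matchLabels:
        crossplane.io/claim-name: test-topic        # the user-facing name
        crossplane.io/claim-namespace: team-payments
  providerConfigRef:
    name: "123456789012"
```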
So to wrap this up, I'm going to cover some of the takeaways from our journey so far. It looks like we only have five minutes, so I'm going to skim over this a bit, but the main one I would hit on is again this idea of mixed maturity, where we're using a mature set of tooling, the Kube API and controllers, applied to an evolving use case. The typical pattern is that we realize there's a problem that's going to have a lot of impact on the success of our platform and therefore needs some deep analysis, API design being a great example. So what I would call out to everybody is that we need a lot of time and a lot of effort put into the organic growth of best practices and documentation, and especially standards, within this space if it's going to be successful for everybody. To that effect, don't discount the value of even a simple technical blog post, not just a code contribution.

And lastly, I'd like to touch on a few of the engineering culture takeaways as well. Often these are even more important than the technical ones, and we found this as well. In the interest of time, I'm going to hit on what I found to be the most pertinent one, which is that this is a technical paradigm shift, and therefore it's akin, to my mind, to learning a new programming language. Imagine taking an engineering team and telling them they're going to write their code in Rust from now on, or some other new language. There's frustration; try to ship a new app in a new language if you haven't done it in a long time and remember how frustrating that is. So you, and all of your engineers, will spend the first few months being less high-velocity than you were before. You need to, one, account for that and be empathetic, and also make sure that everyone is aligned on the value you're trying to deliver over the long term. Because when those frustrations and problems start to surface, if you're not all in agreement about why you're doing it, then how can you justify the cost? So with that, I'll go ahead and end it. Thank you, everyone, for attending, and we should have a few minutes for questions. There are microphones on each of the sides, so if you go ahead and line up, we're happy to answer them. Thanks, everybody.

Yeah, I would like to ask a question. It really reminds me of an old problem in programming, where it was a good pattern to make an abstraction on top of a database. But as the software and libraries matured, people usually just work with it directly; let's say, this is a SQL database, we will never go to MongoDB. And doesn't it feel a bit similar, like you are building an abstraction on top of something when that abstraction is not stable? Because probably migrating topics from SNS to RabbitMQ will never happen, or otherwise it would require changes to the abstraction too.

When you say the abstraction of the database and sort of the pitfalls of that, you immediately think of ORMs, right? And all the challenges associated with that. Yeah, so that's a general software design question, so obviously I'm going to have my opinion on it, and other folks might have different opinions. But to me it's: don't abstract the functionality, just maybe limit it, right? And that seems to be a pattern emerging nowadays. If you notice, we're not making a thing called "topic," some general term; it's literally an SNS topic. We're not pretending to be general; it says what it is, like "subscription." Exactly, it's not a general pub/sub API. It's just a limited API for managing SNS. And in this use case I can see us following that pattern for the foreseeable future. Yeah, so otherwise you could, let's say, maybe add some validation on the AWS CRD and just say, okay, users, you cannot use these ten fields, use just these two, and it would also be the same pattern, just to understand it. Yeah, that's what we want to use the compositions and XRDs for.
That's what the defaulting is all about: we just limit the fields and default the rest to certain things. That's really just for security, yeah. Thank you, now I understand. That's really a good pattern, in my opinion. Thank you. Thanks, we can go to the other side.

Hi, hello. Okay, so if I'm correct, you're using Crossplane for your top-level CRD, and then also for the infrastructure stuff. Why not write your own operator for the thing that's closest to Autodesk, the top-level CRD, and then control everything underneath with that? Yeah, it's a great question. So we are not, right now. That top-level resource, Jesse mentioned this, we realized, you know, there be dragons. I don't know how else to say it. It's actually a really hard problem; we'd probably go all in on something like OAM with custom resources, but we are avoiding that top-level resource. Sorry, I didn't get into the details of where we're at on this journey. We are focusing on the low-level resources, so basically infra and compute, and at the top level we're sort of relying on our existing config and mapping it to that today. I had this in one of my last slides, but essentially for that top-level resource, OAM and KubeVela are less mature than some of the other tools we've seen. So we're waiting.

And then to clarify, you did lift and shift from Terraform to Crossplane? What was that like? Yeah, so in many instances, yes, right? But only in instances that are already encapsulated in such a way that the users don't actually need to care about what's under the hood. We do have a lot of users who are very familiar with Terraform, and we will continue to support them using Terraform controllers of sorts; there are a couple of different options there. But for the simple infra, they're already using that through an interface that doesn't have any understanding of what's under the hood. Yeah, go ahead.

Okay, thanks. Yeah, thanks for sharing your experience with us today. I would like to know how many CRDs you have in your cluster. And the second question is... Well, yeah, actually that's really funny. I mean, a ton, because out of the box Crossplane just plops everything down on the cluster, right? That's a known issue. Yes, so this is my question: how do you handle the issues with a lot of CRDs? Oh, is that what you're asking, because of the performance issues associated with it? Yes. Yeah, exactly. Interestingly enough, the Kube API, as far as our experience has been, is totally fine with it. It's all the other tools that talk with the Kube API that end up breaking. The client side, yes. It's anything that tries to say, give me all the CRDs; like Argo, it doesn't like that. So the short answer is we avoid using those tools in those use cases, is really what it comes down to. As I mentioned before, we don't have that top-level resource right now, and that's kind of how we can get away with this. We're still translating a config file that we own into these low-level resources, with the end goal of eventually doing the top-level one. So we don't have those tools saying, give us all of this, like Argo trying to map that top level so you can see everything below it. We just don't have that problem. What did you use? It's literally JSON. It's not actually anything fancy; it's in-house tooling, sort of our legacy, that's known to need to be swapped out.
Okay, but the client-side throttling issues? Yeah, so we're not hitting those, and this might be another part of why: users don't get access to our Kube cluster, at least today. So that's kind of the main thing; I guess the theme is we're not directly integrating many tools with the Kube API. We're using it really to give us a clear interface and to manage our own resources for ourselves. Okay, thanks.

Oh, good. I've got more of an observation, and I hope it's not something you've thought of already, but you pointed out the selector label you created: for the namespace, you named it, then you brought it to the cluster level and then you referenced it. It's kind of a selector label. My point is, if you're looking for patterns, it's already there. Look at Deployments and how ReplicaSets reference Deployments, and Pods reference the ReplicaSet, right? The examples are there. Yeah, exactly. That was actually sort of exactly in line with what I was getting at. The patterns are all there; it's just a matter of saying, oh yes, in this problem domain, here's the pattern to pull forward and apply. If you steal it correctly, right? Yeah. And think about the reference IDs, how the Pods reference the ReplicaSets that created them. I mean, that's the part that's not there, but that's also there if you steal it correctly. Yeah. So I'm guessing, do you work on that tooling at all? I've worked on Kubernetes for years. Yeah, that's what we need. We need people to produce that: here are the patterns, apply them, and everyone else will be able to use this more quickly. Thanks.

All right, I think we're over time. I'm happy to stick around and answer questions, but thanks for joining.