Thanks for joining us, everybody. Hope we're all having a great KubeCon. Almost the end, I think two more talks after this. So thanks for joining us at the end of the conference here. I hope we're all having a great time. So Jesse and I are here to talk to you today about rapid IDP development. This topic has a lot of interest right now. It's something that we're very passionate about. And not only that, we've been working on a lot of open source tools and technologies that we'd like to share with you today. So one of the more interesting things about this talk is at the end, you'll see some QR codes where you can find all the source code for our demos and the tools that we've developed here. But first, I'd like to introduce ourselves. So Jesse, you want to take it away? Sure. We're going to come over here. Are these hot? OK, there we go. All right. Yeah, I'm Jesse Sanford. I'm a software architect at Autodesk. I work on our developer enablement team, which handles our internal developer platform. And when I'm not doing that, I like to work on software supply chain security. So you might see me speak over there sometimes too. And when I'm not working, I like to sail and spend time outdoors with my daughters. Thanks, Jesse. I'm Greg Haines. I'm also a software architect at Autodesk, focused on our cloud software delivery platform. Most of my background's in open source cloud software, so I'm fairly passionate about it. And I guess, since we're talking hobbies, when I'm not working I've lately been snowboarding a lot with my two kids. So what are we here to talk about? First, I'm going to start out with the why. What are the challenges we've been facing at Autodesk? And I'm going to try to convince you all, based upon these challenges we've had, of the value in creating the tooling to build internal reference developer platforms, or IDPs. After some of that history and context, we'll talk about the tools we created and open sourced.
So these tools really help accelerate the development of our internal developer platform. And I'm hopeful that they can be useful for you all as well. And then finally, Jesse has an awesome demo he's put together that he's going to show you. Pray to the demo gods that everything goes smoothly. But his demo is really interesting, and it's basically how you can use these tools to do this rapid IDP development. So how did we get here? At Autodesk for a while now, our internal developer platform has really consisted of Jenkins, Spinnaker, and a whole bunch of in-house tooling that glues it all together, plus automation to perform configuration. It's a pretty common, albeit legacy, software development platform. And it's similar to what many organizations have today that haven't really made the transition to a full cloud native development environment. The problem with this platform, which we realized a while ago, is that we struggle with velocity, especially related to shipping changes with high, consistent quality. Large changes are risky, and therefore take a lot longer than they should, and cause undue toil for our users. So I'm going to talk briefly about what problems we set out to solve in this transition. But I'm not going to go into too much detail, because this was the focus of a talk we gave last year. So I highly recommend checking it out if you want to really understand the problems we were facing and how we framed the opportunity in this transition to this new cloud native developer platform. But I'll hit on some of the high points just so you can get some context here. In short, we realized that we had significantly drifted from the norm. And there are all these cloud native developer tools out there that are all open source. And we really wanted to be able to leverage them and accelerate our development. But at the same time, we're also a large organization.
And so we needed to make some long term investments so that we could scale the development of our platform. Given this, we decided to double down on a few things. The first one is GitOps, and then also using Kubernetes as our standard control plane technology. And so we've done things like use off-the-shelf controllers, and then also build a whole bunch of our own in-house controllers to expose capabilities to our software engineers. And by doing this, we'd be able to leverage standard tooling like Argo CD and Backstage that can be used across this suite of functionality, instead of writing one-off integrations for each of these tools whenever we go to deliver some new dev platform capability. And that's where we found ourselves about a year ago. We were building out this new platform, migrating pieces of functionality to it. And it turned out we were right about many of the wins. Future development was a lot faster. Teams could work independently but leverage one another. And we were able to perform a lot more validation in this new environment. Most of all, though, we were closer to the norm. We were able to increasingly leverage open source tooling and solutions and practices in our development that we didn't have to create in-house. We've seen significant wins from this. So why are we back here? What's the point? Well, it turned out there was a very interesting set of problems that we ran into not long after. And the main one is that reality starts to look more like this. And the emphasis is on the segmentation between these technologies: it's actually really hard to make a single developer platform. And it's far too easy for teams to work within their spheres of influence. So it's a real shame, especially when you look at those of us who have experience with these technologies, because the wins usually come from workflows that span all of them. And so this is truly a case where the sum is far more than the parts.
And so when our teams and our software are treated like individual pieces of technology that don't interoperate with the rest of the platform, we're regularly missing out on the bigger-picture wins we could get. And maybe some of that's just my architecture hat. But we felt very passionate that we could deliver a better experience by enabling our engineers to work more closely with one another. So as an example, we can expose CRDs for all sorts of things. Like maybe we can create a CRD for exposing infrastructure via the Kubernetes resource model. But without Argo CD providing some standard GitOps workflow, or without Backstage making that discoverable and easy to use, it's just not that valuable to our users. And we keep hitting this. So what could we do? Well, let's walk through what it looks like in reality before I get to the what-can-we-do, actually. So the problem, we realized, is that what's needed to enable these teams to work together is a really cohesive development, CI, and CD experience. So we might envision the world actually looks like this, where we have a set of technologies powering our dev platform. But we found out this is actually very challenging to implement. So this is what our tool focuses on: how can we get to this world where there's a set of technologies altogether that have a consistent dev, CI, and CD experience. So again, I'm going to walk through an example here. Imagine you have a developer portal team using Backstage. And they're really experienced in Node.js software development. And what they need to do is add support for change deployment and promotion to the dev portal. And like I said before, we're following typical GitOps practices. So we're using Argo CD. So now your Backstage dev portal team needs to find a way to integrate with Argo CD to develop this feature, let alone understand it. But they also want to show things like the status of deployments and the health of deployments.
And the dev portal being part of promotion seems like an obvious thing to do. Oh, and then on top of that, you're also creating some custom controllers to manage infrastructure with Crossplane. And so what you're really asking this heroic dev portal team to do is to build all of these integrations with what your dev deploy platform looks like. And I don't know what your dev deploy platform dev cycle looks like, but at least for us, it looks very different from what our Node.js application development dev cycle looks like. So even worse than that, it's challenging to develop and to get a shared understanding of what this functionality needs to look like. But then you go to CI these things. And it's even more challenging there. Your pipeline for doing Node.js software development isn't really set up to do the infrastructure or the dev deployment pipelines. So while this might seem like a contrived example, in my experience, this really breaks down our ability to build that cohesive experience. Because these two teams, they're just not speaking the same language or thinking about solving the same problem. And the whole spheres-of-influence problem comes right back into play. Or what's the word? Conway's law. But basically, these teams are working independently, and we need them to work together. So not only is this a problem from an engineering point of view, we need to get the PMs, the UX folks and designers, the architects. We need all these teams speaking the same language. So I went to a great talk yesterday, I think it was by Dims and some of the other folks on the SIG Architecture committee. And one thing they mentioned was that their job as architects is to counter this problem where there are teams working in their own spheres of influence, and to try to find ways to implement solutions that might span the set of different software teams across the platform. This is exactly the problem that we realized as well, and that we're tackling here.
And what we found is that without a simple way for the teams to use and experience their overall dev platform, the shared vision is very difficult to develop. And this is what we were after with our reference platform. So how did we build this? We started with: what is the lowest common denominator for a set of technologies that we could build this upon? And to us, that was GitOps and Kubernetes. Because we're using Kubernetes as both a runtime and a control plane here, but then you need some way to get configuration there, so GitOps. And for us, that was Argo CD. And then it was like, well, what if we could take the different things that go on our dev platform and come up with some reproducible and redistributable packaging format, so that we can inject these into our lowest common denominator here. And one of the things we did was inspired by this project called ko. I put a QR code in the corner. And I also saw someone else gave it a shout-out earlier today in the image building talk. But this is one of my favorite tools that I love for doing controller development. So I highly recommend checking it out. But essentially, at the top here, where you see the spec.source.repoURL, that's a snippet from an Argo CD Application manifest. And essentially what we did is we came up with some tools that allow us to rewrite the source for an Argo CD manifest to a relative path to the resources that belong to it. And with this, we can now package up an Argo CD application, or a set of Argo CD applications, along with all the resources they install. And we've now got a redistributable package for installing things into our dev platform. And so with this, we're able to run our tooling, which Jesse's about to demo, and package up all these different applications plus resources, along with this base set of components, into a single redistributable binary. And with this, all of our teams are able to now use this tool. They run it. It's called IDP Builder.
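To make that rewrite concrete, a packaged Argo CD Application can look roughly like this. The `cnoe://` relative-path convention matches how idpbuilder marks local package directories, but treat the specific names and paths here as illustrative:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-package
  namespace: argocd
spec:
  project: default
  destination:
    server: https://kubernetes.default.svc
    namespace: my-package
  source:
    # Instead of a remote Git URL, this points at a directory relative
    # to this manifest. At install time the tooling pushes that
    # directory to the in-cluster Git server and rewrites repoURL to
    # the resulting repository, so the whole package travels as files.
    repoURL: cnoe://manifests
    targetRevision: HEAD
    path: "."
  syncPolicy:
    automated:
      selfHeal: true
```

Because everything the Application needs sits next to it on disk, a directory of these manifests can be committed, zipped, or baked into a binary and re-applied anywhere the tool runs.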
And it essentially stands up the dev platform plus our own set of packages. And that's used as our reference implementation across dev portal, infrastructure, and custom operator development. But also, most importantly, it's used in CI as well. So now we can do integration testing between these different pieces of functionality. And lastly, I want to mention that we didn't do this alone. So early on, we started this open source working group called CNOE. There's a QR code to one of the projects in the CNOE organization. This working group has developed this tooling called IDP Builder that allows us to inject these packages. Participating in this open source project is really what allowed us to get to this level of maturity, rather than working on our own and solving the problem in a way that would only work in our CI pipelines. So I think Jesse's got some good call-outs at the end of his demo about the other folks who are involved. But I guess the message here is that by leveraging our community, we're able to deliver something far more valuable. And with that, I think it's time for your demo, Jesse. Awesome. I just want to make sure that we're all right, good. So can I see a show of hands for how many people have passed the CKAD exam? Anyone? Well, that's great. That's great. How about folks who've taken it and haven't passed yet? Well, I'm in that boat. I'm not embarrassed. But not everyone is writing controllers yet, right? It's not the case that we're all operator developers. And we need to bring a lot of these different developer backgrounds together. And that's part of the tooling we're going to show here. I actually asked my daughter to interpret what she thought CKAD was. And she came up with cool, kitten, alligator doctors. That's what we got, right? Unfortunately for her, that is actually not what it is.
But if you didn't already know, and not everyone does, it is the case that we have Backstage developers and others. And one of the goals of this tooling is to actually make it easier for them to work together. Even if you are an operator developer and you're well-versed in the KRM and the Kube API, how do you do your testing? How do you collaborate? How do you actually get your code interoperable with someone's Backstage plugins? So we're going to show some of that. What we're about to see is the standing up of our IDP Builder, installing our CNOE reference implementation package, which, like Greg said, comes with Backstage, Crossplane, Argo CD, Argo Workflows, and a number of others. And then we're going to look at a very contrived Kubebuilder source tree and use ko to actually build and deploy that operator to our local cluster, and then generate from the CRD a Backstage template that we will then look at in Backstage and be able to hydrate. And that's kind of a soup-to-nuts workflow for what might be considered capability development for your IDP. I do want to call out that we do have a repository. You can check out that QR code if you want. And we do have stars, but we need more of them. So hint, hint. Go ahead and help us out there. We also have documentation at cnoe.io. And again, the QR code will kick you there. And it's good, but it could be better. And so we are accepting pull requests. So hint, again, you're welcome to come help us. And now we're going to try a live demo. And I don't know how this is going to go. So bear with me. This is KubeCon after all. And live demos are notoriously bad. Give me a second to switch over here. All right, so let me just clear this. Is that visible to everybody? Yep, wrong browser. Looks pretty good. I can make it a little bigger. There we go. All right, so as you saw, the project page, like most things, will have some package releases. You're welcome to download those binaries.
Or you can download the source tree for yourself and just run the make command and get yourself one for wherever you are checked out. Here I'll just show that we have our source tree here. Now it's really big. And in there, there's our main.go and so on and so forth. So I actually have it already built. And I'll just run IDP Builder so you can see. We have some nice help output. Let me get a little bit smaller. And we're going to be focusing on the create command today. There are some other commands and other switches there. And for the create command, if I do IDP Builder create with help, you'll see that, oh, got to remember, it's not in the path. All right, so you'll see that there's quite a bit of output there. We're going to focus on this package dir string, like Greg said. We have the ability to take a local directory and wrap it up into a place where Argo CD will be able to deploy it for us on our local cluster. I'm just going to show that I'm not doing anything tricky here. We actually don't really have anything on this other than cert-manager and the basic kind namespaces. So I'm going to just kick off a run here. It might be the case that this takes a little while. So I do have one trick up my sleeve. I have another kind cluster running in the background with everything already installed. So, live demos, right? Let's see, create and dash dash package. There we go. So the path of the reference implementation is just built into the source tree in the examples folder, if you're trying to do this at home. So there you go. It's going to stand up some controllers locally and start reconciling all of the resources, from the core packages that are built into the binary itself, which consist of Argo CD, Gitea, and ingress-nginx, and then it's going to move on to installing everything else from that reference implementation package. So I'm just going to switch to a different window here.
Oh, let's see, let's do, so let's get a context. So I'll just let you see that I do have everything running over here. And, oh, wrong one. That's strange. Demos. All right, let's just switch back to a different VM. Sorry guys, who would have known? Not this one, let's do this one. You think you have everything prepared? So while that's working, let's see if that's going. I'm going to bring up my backup video. So we are well-prepared for this. It'll open. What's happening? That's it, okay, all right. Sorry everyone, I'll try to, oh, that's the wrong one. It wouldn't be fun if I wasn't plagued by all of the demo gods that everyone who comes to KubeCon is, right? So we'll just skip ahead a little bit. So we'll see here, we move on, show that, okay. That's running, all right. So we're switching to the local dev. And so these are the namespaces. Let's make this bigger. These are the namespaces here that you would normally see after the reference implementation is installed. We have Argo CD, Crossplane, and so on and so forth. I'll switch back to the VM that did the install right afterwards so that you can see that it's actually running. What's nice is that we do install all of the GUI interfaces as well. So you can see that the Argo CD app comes up. It's on localtest.me, which is basically a DNS trick to point back to your local machine. Nothing special there. It's essentially 127.0.0.1, but it helps with working with ingress-nginx and HTTPS. So we'll see here this is on 9443 now. And then these are the Argo apps that come with the reference implementation. We have the Backstage app here. Click this here. Argo CD and Crossplane. We click through the Backstage app. You can see some of the resources, so on and so forth. So this is basically a vanilla install of Backstage. It's essentially generated, and then we build the image with some CNOE wiring on top of it to get it to work locally with IDP Builder.
Nothing much there, but there are some CNOE plugins that are interesting. If you do Backstage development, we have some friendly scaffolder actions plugins that help with building Backstage templates, right? And you'll see here the Backstage catalog is essentially empty when it first starts up, as you would imagine on your local machine. We'll go to create here so that you can see the templates. We actually don't have any templates in there by default. And what we're gonna do is we're actually gonna put something in there, right? So in order to do that, we actually need, well, the whole purpose of this is actually to plug in an operator, right? So we have this contrived example of an operator. And I'll show very shortly the API types so you can actually take a look and see that it is very contrived. It's really a wrapper around Deployments. So let's jump ahead. Okay, yeah, you can see here that we've just basically taken replicas and image. I'll just scrub a little bit farther. So we can make the manifests, right? This is typical of a Kubebuilder workflow. We make those manifests. We generate the CRDs, right? And then out comes our deployment, yeah, well, sorry, the manifest for the CRD itself. And you can see here that it contains the replicas and the image data fields. It's gonna be a little bit hard to see exactly, but they're in there, right? So let me take that CRD. Actually, before we do that, we'll use ko to actually build the image itself and then deploy that directly to our local registry so that it can actually be picked up by our cluster. And we use Gitea and its OCI registry to contain those images. So you'll see here that we actually set up the, let me move this out of the way, the local Gitea registry with this ko environment variable. And then we basically pipe that through ko, and then ko's gonna spit out the manifests to actually do the deployment. So again, if you're thinking about this, I'm building an operator.
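For reference, the CRD manifest that Kubebuilder generates for a wrapper-around-Deployments operator like this would look roughly as follows; the group, kind, and version are hypothetical stand-ins for the demo's actual types:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: myapps.demo.example.com
spec:
  group: demo.example.com
  names:
    kind: MyApp
    listKind: MyAppList
    plural: myapps
    singular: myapp
  scope: Namespaced
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                # The two fields the contrived operator exposes:
                # how many pods to run and which image to run.
                replicas:
                  type: integer
                image:
                  type: string
```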
I'm rapidly iterating on that. I'm deploying locally, and then, you'll see the full workflow, I'm actually able to see what the output of a scaffolded template based on that operator's types would be, which I could then hand off to template folks or Backstage developers to make more user friendly, and so on and so forth. So this is the manifest that's output by ko, and you'll see here that it's pushing this, well, it actually doesn't when you're in local mode, but you get an image built on your local machine that you can then push to your registry, which you'll see me do shortly. This basically just contains the binary for the controller. So I'll push it locally. Again, when ko is in local mode, it won't automatically push for you. So that's now in the registry on the cluster where everything else is running, including Backstage. So let's take that CRD over to this other tool that we built at CNOE called the CNOE CLI, which is kind of a multi-function Swiss Army knife of a tool. And one of the features that it has is the ability to take a well-formed CRD and output Backstage templates. And then you're able to sprinkle in some extra templating on top of that so that you get the right actions for deploying your CRDs if you want. And I'll just run the CNOE help here. This is actually also inside the CNOE org. It's open source, you're welcome to download it, but it's pretty handy. I mean, the templating feature is actually very handy. And I'll just show here, which I'll show you shortly, that when I switched over here, it's showing that this is basically the output of a run of IDP Builder, and it'll give you an Argo CD address to hit and the password there. So now I'm just basically showing that the CNOE CLI is here. We'll skip ahead. So let's run that templating tool. We'll generate the Backstage template from the CRD. If I can get to it, apologies.
So, well, in there, there is a command line argument. I don't know why it's so hard to catch it, but we produce the template, and then the template is what you would expect. It's a Backstage template. It takes the spec from the CRD and basically just drops it into the Backstage template. There you see the image string and the replicas. Oh, and again, I'll call out the CNOE plugins. We basically have some plugins to help make it easier for you to shuttle YAML along into Kubernetes, amongst other things. So you see here that I actually have to push that template to Gitea, right, in order to get it to load into Backstage. So I create a repo in Gitea, this is all on the local machine, create a repo in Gitea, and then I push it, and we then have the template located in a place that Argo CD, oh, sorry, that Backstage can grab it from, right? So here's the template that we just created from the CRD. So we take that raw URL, bring it over to Backstage, right, and then we go to the templates interface and register a new component, drop it in there, and it runs an analysis and then loads it, and now we actually have that new template inside of Backstage. So this template here is directly generated from the CRD that we just created inside of our Kubebuilder project. So you'll see here, it's pretty boilerplate, right? Like it surfaces a lot of fields that you might not wanna show, but maybe you wanna use in some templates. But anyhow, we have the replicas field, we have my deployment, we have test, or sorry, we have the namespace field. So we can actually go ahead and try and create that. And unfortunately, the first time I ran this, I realized that I had not actually installed the types. When we ran ko, which maybe you caught there, the output wasn't piped directly to kubectl apply. It was only shown on the screen.
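The generated template is a standard Backstage scaffolder template whose parameters mirror the CRD's spec schema, which is why every field gets surfaced. A trimmed sketch with hypothetical names; the apply step is assumed to come from the CNOE scaffolder actions plugin, so treat the action name and input shape as assumptions:

```yaml
apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
  name: myapp-template
  title: Create a MyApp
spec:
  owner: platform-team
  type: resource
  parameters:
    - title: MyApp configuration
      properties:
        # Lifted straight from the CRD's openAPIV3Schema, so
        # anything in the spec shows up as a form field.
        namespace:
          type: string
        replicas:
          type: integer
        image:
          type: string
  steps:
    - id: apply
      name: Apply the manifest to the cluster
      # Assumed action name from the CNOE scaffolder plugins; it
      # serializes the form values into a CR and applies it.
      action: cnoe:kubernetes:apply
      input:
        manifest: ${{ parameters }}
```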
So what this action is telling us is that, hey, I went and checked the local Kubernetes, you don't have that type, you shouldn't even try to deploy, it's just gonna fail, right? So that CR was destined for failure. So let's go back to the ko step. Actually, what you also see here is that I'm installing the cluster role and cluster role binding for Backstage to be able to write those resources as well. So we're gonna run ko again, but this time we're gonna pipe to kubectl apply. And so that's actually gonna write the manifests directly to the local Kubernetes. And you'll see it stands up the controller as well. So now we have the types inside of Kubernetes. So if we just start over, we can actually try to just redeploy with the same variables, but actually, I realized we should probably at least do one replica. And you'll see here that it actually makes it all the way through the steps, and we get our apply manifest. When I show the logs, we'll see that the CR was correctly hydrated and deployed. And also we can actually go ahead and check the Kube API and just see if those types exist anywhere in any namespace. And yeah, there's our test CR which was deployed. And if we describe it, we can see that it actually has the contents that came directly from Backstage. And actually we can take a look at the controller logs. There's actually a bug in this controller. So if we were to make this demo longer, I would show going in there, redeploying the controller, and iterating. But I think you can imagine, right? So that's it for the demo. Let's see if that cluster actually came up. If that VM's still running. Can't resolve. Ah, no, it didn't come up. Okay, well, if you're interested, I can run through it again afterwards. You can come check it out on the side. I know that might have been either too quick for some or not quick enough for others, but I was trying to solve for demo failures and I did not make that happen, my apologies.
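The custom resource that ends up applied to the cluster is then just a small manifest carrying the values typed into the Backstage form, something along these lines (the kind, group, and values are illustrative):

```yaml
apiVersion: demo.example.com/v1alpha1
kind: MyApp
metadata:
  name: test
  namespace: default
spec:
  # Values filled in through the Backstage template form; the
  # controller reconciles this into an ordinary Deployment.
  replicas: 1
  image: nginx:latest
```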
But thank you very much. I think just to put a capstone on it, the idea is that we have these workflows. They need to work together. This tool allows us to do that in a way that's portable. So I can make changes to my operators. I can make changes to my Backstage. I can put those somewhere for you to download them, on my GitHub or elsewhere. And then you can pull them down to your machine, and you'll know that when you get those resources on your machine, if they're packaged in the right way, you're gonna be able to deploy them locally and we'll all be able to play nice together. Thank you. I think we probably have a... Yeah, if there's any questions, we're happy to take them now. I think there's two microphones on the sides. I'd just like to say I've seen this and used this before and it really works and it's great. I already deployed the Canoopio CV4, which is really helpful. Something to add maybe is that it's an opinionated solution at the end, right? So the question is whether the community will be building that up to also support other possibilities. But thank you so much for the work. Thanks a lot. Glad to hear it's useful. You wanna talk about the opinionated part? It is opinionated, correct. Yeah, there's a couple other questions. So let's go to the one on the right first. Hi, my question is, do you have any features related to Backstage besides importing templates into the scaffolder? So again, there are these... We do have open source Backstage plugins for the scaffolder, and I believe there may even be a front end plugin as well. One of the other things that you don't really see in the demo there is that we've wired up a local Keycloak. So we've essentially focused on local environments and building locally. But I think it's the plugins, the scaffolder actions, which you'll find under the CNOE org; it comes with some serialization plugins for YAML so that you can do prerequisite checks against Kubernetes types, as you saw. There's a couple of other things that we have. Okay.
On the other side now. Hi, yeah. So is this like a proof-of-concept tool that companies are meant to take and expand for their specific use case in their company? Or is it a Backstage, Argo CD opinionated tool that's going to keep expanding in the future, for companies that just want to take it and use it as their internal platform and use all these tools that are just open source? Right. Let me take that one. Yeah, so probably the best analog, the inspiration of this is, this is a tool that I've worked on across three different generations of cloud dev platform stuff. It's like, think of minikube, or think DevStack in OpenStack land. In order to make development productive, or to get people on board with the technology, you want to have the easy path: run the one command to set everything up so you can play with the thing and you can prototype and you can build on it. So that's the intent. It's like, let's glue a bunch of stuff together and find a way for people to inject extra things on top of that, and then you can use that for local development and CI. So your question about what it's to be used for: there's actually a handful of companies already doing the same thing we do, which is, if you go to our control plane repository where we write all our controllers, it just runs this IDP Builder tool, and you give it some packages in that repo for local development. And then when CI for that project runs, it does the exact same thing. It just runs IDP Builder to set up a CI environment with all these technologies and then runs the tests for the controllers on top of it. So yes, it's very portable. So the target of this project is platform developers? Yes, exactly. Thank you. Absolutely, absolutely. Switch to the other side. So, great tool and great talk.
I think we all know about this; I think we've all built a couple different versions of something that looks like this to pretty much design our internal productivity tools. And I'm super curious about the part that we have felt the pains of, which is the Git part; we just didn't think of using GitHub at all. So how do you use GitHub, or any other provider, I assume, on your production side, and how did it feel replacing it with Gitea? Was it close enough to replicate what you wanted to do on the actual Git provider you were using, if you were to name any? Yeah, it's a really good question. So it's definitely not close enough to replicate, just because there are all sorts of GitHub-specific things that happen, like your webhook events when someone creates a PR, and things like that. So yeah, in production, we use GitHub Enterprise, but for all intents and purposes, GitHub. And that's what our CI runs against as well. It's not running against Gitea. The Gitea was kind of a stopgap, because it was actually not so much to replace GitHub as that Argo CD needs something to read the resources from. And so Gitea was the easiest way to give us that Git server. And it allows you to work locally, right? Yeah. You could use IDP Builder against GitHub that's elsewhere, public GitHub as well, if you were able to dig in there and swap out the places that reference it, but it wouldn't be as useful a user experience. But we are talking with CDEvents, and some of the folks who work on that. So there's a lot of potential for us to try and decouple GitHub from things like Argo CD. And that's what we're hoping to use this project for: to drive those interfaces so we can plug those parts out and not make it a hard dependency. Makes sense. Thank you. Yep, go ahead. Hello. Great talk. We are very interested in the tool. And we have tested it. We have tried it in our company.
But we want to know if you think that this tool is a production-ready tool, or do you think it's also in a, yeah. Well, I guess I'll take that. It's production-ready in the sense that it's ready for you to run it on your local machine. I would not expect this tool to produce anything that you put into an actual production stack. But it is helpful for getting things that are going to production tested in the CI tool chain and elsewhere. But for all intents and purposes, I would use it for my local developer workflows. I would not try to stand this up and host anything with it, or what have you. The reference implementation that it's built upon, I think, is being actively developed in a way that can be used in production. But that project is also under the CNOE org inside of GitHub. And it's a little bit separate. We do include a version of it in the source tree that you saw there. But that's not the version that you would be deploying to production either. It does contain the same tooling, just somewhat different configuration. Thank you. All right, that's it. Thanks a lot, everybody. Oh, yeah. Big shout out to Manabu, and Charles, and Nima, and the rest of the CNOE team. Manabu gets MVP. He really did a lot for this project. Yeah. Thanks. Thank you.