Hey everyone, welcome. I'm Ritesh Patel, I lead products at Nirmata, and today we're going to talk about the BACK stack. And I'm Murph; I'm responsible for field engineering in EMEA at Upbound. Awesome, so let's get started. Thank you, everyone; it's nice to see so many people show up for this talk on the last day of KubeCon. Here's the agenda: we'll start by talking about platform engineering, how it's evolving, and the value it provides, and then we'll get into the actual meat of the discussion: the patterns we've seen emerge in the community, and how we've collaborated to come up with a reference architecture for a developer platform. I'm assuming most of you know what platform engineering is all about. Last year there were barely any talks on this topic; this year there are twenty-plus. Platform engineering ultimately enables enterprises to deliver shared platforms that can be leveraged by multiple teams. A platform engineer is responsible for several things, from designing the platform to implementing it, making sure all of the various capabilities are automated, and then operating the platform: providing assistance with troubleshooting, monitoring, things of that nature, and ensuring the platform continues to evolve as the needs of different applications and teams evolve. Obviously, there have to be business benefits for an organization to invest in all of this. Some of the core benefits we see organizations get out of investing in platform engineering: reducing overall complexity, since you deal with a single platform versus multiple disparate
platforms. Another goal, and benefit, is enabling developer agility through things like self-service, and eliminating silos where different teams do different things, which helps with collaboration. Shared platforms enable better resource utilization, which helps optimize overall cost, and having to secure a single platform instead of a lot of different infrastructure mitigates risk. Those are some of the key benefits we've seen initial adopters of platform engineering get. So when you start looking at what a platform requires, some of the requirements that come up are these: you want the platform to be composable. There's no one-size-fits-all; every organization is different, so being able to pick the right tools and the right components for a platform becomes important. One of the reasons Kubernetes is adopted is its uniform API, independent of which infrastructure you run on, so being cloud agnostic is a key requirement, and most of the time using Kubernetes-native components makes the platform easier to operate. In this case, we'll show how we're using some of the mature CNCF components for a platform. And as the platform evolves, you want the flexibility and extensibility to add more capabilities as needed. If you're not familiar with it, the CNCF has a Platforms Working Group, which has put out a high-level reference. I wouldn't call it an architecture; it's more a description of what a platform should include and what the different components are, and it even highlights some of the projects.
So we're going to go off of that reference and show you what we've seen some of the earlier adopters build for a platform. We're calling this the BACK stack, and BACK stands for the four projects involved in this reference architecture. The first one is Backstage, the developer portal. Most of you may be familiar with it; there have been several talks on Backstage. It was originally conceived by Spotify and then donated to the CNCF. Next is Argo CD. GitOps is very important, so Argo CD is one of the core components of this stack, providing that GitOps capability. A shout-out to Nick here from Argo CD, who helped collaborate on this stack and this platform; thanks, Nick. The next one is Crossplane. Crossplane, as you may know, is a universal control plane that helps with provisioning and automation, and we'll go deeper into how it enables a platform. And finally Kyverno, for policy as code, governance, and security automation for your platform. Together these form the BACK stack, and we're going to jump into the demo. Murph is going to lead the demo.
Thanks, Ritesh. So let's have a look at how all these things are put together. We're not the first people to combine these four technologies, but we represent the organizations that chiefly maintain them, and we wanted to build an example of how you can put them all together, because if you're starting from nothing it can be quite challenging. Think of this as the seed for what you can do, and as a way to highlight some of the key areas of integration where the different technologies are complementary. To begin with, let me start with this little diagram here. Our use case today: as a platform engineering group, we are tasked with exposing several different capabilities to our organization. The first one is Kubernetes clusters as a service; parts of the organization need to spin up Kubernetes capabilities in different clouds. They need EKS, they need AKS, and we're going to give it to them. The Backstage catalog contains the components of the stack itself, so you can see all four of them sitting here; they're all running in a single demo cluster. One of the things we've created, using the Backstage scaffolder, is a template that creates claims for infrastructure. We'll talk a bit about what that actually means, but let's go ahead and ask for a new AKS cluster. We'll give it a name we'll use to refer to it; let's call it backstack-demo, very creative. We'll put it in West Europe, because I know I have quota in West Europe, and we'll make this thing real big: five nodes in our node group. Ultimately our scaffolder is going to create a pull request in our repo.
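A scaffolder template that behaves the way Murph describes (collect a few parameters, render a claim file, open a pull request) might look roughly like the sketch below. The template name, parameter names, skeleton path, and repo coordinates are illustrative, not the exact ones from the demo repo; the scaffolder actions shown (fetch:template, publish:github:pull-request) are standard Backstage built-ins.

```yaml
# Hypothetical Backstage scaffolder template (template.yaml)
apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
  name: aks-cluster-claim
  title: New AKS Cluster
spec:
  parameters:
    - title: Cluster settings
      required: [name, region, nodes]
      properties:
        name:
          type: string
          description: Name used to refer to the cluster
        region:
          type: string
          default: westeurope
        nodes:
          type: integer
          default: 3
  steps:
    # Render a claim YAML from a skeleton, substituting the form parameters
    - id: fetch
      name: Render claim
      action: fetch:template
      input:
        url: ./skeleton
        values:
          name: ${{ parameters.name }}
          region: ${{ parameters.region }}
          nodes: ${{ parameters.nodes }}
    # Open a pull request rather than committing directly: the GitOps flow
    - id: pr
      name: Open pull request
      action: publish:github:pull-request
      input:
        repoUrl: github.com?owner=crossplane-contrib&repo=back-stack
        branchName: claim-${{ parameters.name }}
        title: Add cluster claim ${{ parameters.name }}
```

The template itself does no provisioning at all; it only turns a form into a file and a pull request, which is why so little of the demo effort went into this part.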
So let's identify the repo here, crossplane-contrib, and we're putting it in the back-stack repo, because monorepos are fun. The scaffolder is really not doing a whole lot of heavy lifting. One of the advantages, and challenges, of Backstage: I'll admit that about 80% of my time building this demo was spent learning how to use Backstage, because I'm not a React programmer, and about 90% of the remaining 20% was spent getting my install script to work, because I really wanted to be able to just say "bash install" and have it all work. As anyone who's ever built a demo knows, that's where all your time goes. I want to highlight that, because barely any of the time building this demo was spent building the parts that are getting the work done. But they are getting the work done, and we'll look at what they look like. So this creates a pull request; we open that up and go over to GitHub to accept it. We can see that the request has come in the shape of a claim (is that big enough to see? Excellent), which is a Kubernetes resource. This is a custom resource type exposed by Crossplane: Crossplane allows you to define your infrastructure API and expose custom resource types for it. So all Backstage had to do was scaffold out that claim YAML, and once it gets moved into the control plane, the control plane can go off and do its thing. How will it get from here into the control plane, I wonder? Let's confirm the merge, and we'll follow it around the circle.
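The claim that lands in the pull request is an ordinary Kubernetes manifest. A sketch of what such a claim could look like, with a made-up API group, kind, and field names (the real shape is whatever XRD the platform team defined in Crossplane):

```yaml
# Illustrative Crossplane claim, as scaffolded into the PR
apiVersion: demo.backstack.example/v1alpha1
kind: AKSCluster
metadata:
  name: backstack-demo
spec:
  parameters:
    region: westeurope
    nodeCount: 5
  # Crossplane writes connection details (e.g. a kubeconfig)
  # for the provisioned cluster into this secret
  writeConnectionSecretToRef:
    name: backstack-demo-kubeconfig
```

Everything below this API (networks, node pools, managed resources) is the composition's business, not the developer's.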
Before we go, I want to highlight that we have these checks running. The DCO check is mad at me because it wants me to sign things off, but here we're running a Kyverno workflow: we're able to run Kyverno in GitHub as a response to the pull request and validate the cluster ahead of time. So we can deny pull requests at this level, and then we can run those same policies inside the cluster to perform audit and enforcement once the request gets submitted into the environment. That should be enough time for this to synchronize, and I will collapse the pre-built one. So this is our new backstack-demo, and I didn't put this on auto-sync because I like this part; this is my favorite thing, it just starts going. All that was submitted into the control plane was that single YAML, that single claim, and the composition as defined by Crossplane is going out and building. It has two sub-compositions here, one for the network and one for the Kubernetes cluster itself, and each of those has a set of managed resources that compose together the total platform being exposed. So, great: we've gone from UI to Git; we've gone from Git into our control plane; and now we're building out external infrastructure that we can use. But it would be really lovely if that external infrastructure made it back into Backstage, so what we want to see now in our resource list are the existing clusters. I apologize for the formatting here; I remember I said I am not a React programmer, so the styles are a little sideways. But we have our Kubernetes clusters here, and we can even see the kubeconfig. The kubeconfigs registered by these clusters are stored as secrets in the control plane, and then we can externalize them to external secret stores, in this case Vault. We're using Vault in a very insecure way.
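Going back to that pull-request check for a moment: one appeal of this pattern is that the same Kyverno policy file can be evaluated pre-merge with the Kyverno CLI (for example, kyverno apply policy.yaml --resource claim.yaml inside a GitHub workflow) and again in-cluster in audit or enforce mode. A minimal sketch of such a policy, with a hypothetical claim kind and node limit:

```yaml
# Illustrative Kyverno policy: reject cluster claims that request
# more nodes than the platform team allows
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: limit-cluster-size
spec:
  validationFailureAction: Enforce
  rules:
    - name: max-node-count
      match:
        any:
          - resources:
              kinds: ["AKSCluster"]   # hypothetical claim kind
      validate:
        message: "Clusters may request at most 10 nodes."
        pattern:
          spec:
            parameters:
              # Kyverno pattern operator: value must be <= 10
              nodeCount: "<=10"
```

Denying at the PR level gives fast feedback; keeping the identical policy in the cluster catches anything that bypasses the repo.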
It's configured in dev mode, which means you all know the token, but luckily it's only on the loopback interface, so it's fine. The kubeconfig here is available so that if we want to start talking to our cluster and deploying things into it, we can, with any tool available to us. But wouldn't it be great if that cluster was already registered within Argo, so that we can start targeting applications there? So not only are we loading the kubeconfig into Vault so that external users can use it, we've already registered that cluster into Argo. So I can use my other scaffolder template to scaffold a new application deployment. For our purposes, we're going to borrow the guestbook from the Argo demo environment, so we'll just call this guy guestbook. For the source repository, let's see if I can type this without error (tell me if I get it wrong): the example apps repo. The path is just helm-guestbook, not an absolute path, and we want to send it over to the pre-built cluster, because the demo cluster is still building. Again, we're just creating a pull request, so we're going to go around the circle again: crossplane-contrib, back-stack. Now we're creating a new application pull request: fetch our template, scaffold our pull request, and let's go accept it. All checks have failed because I did not sign it off; we just moved this into a more stringent repository last night, so I didn't have time to add the sign-off to the scaffolder. Fortunately I'm an admin on this repo, so I can push it anyway. So now our application is set up in the Git repository, but nothing is syncing here, because what we're missing is an app of apps to register from that location.
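The app of apps in a setup like this is just one more Argo CD Application, one that watches the path where the scaffolded application manifests land, so anything merged there gets picked up automatically. A sketch, with a placeholder repo URL and path:

```yaml
# Illustrative "app of apps": registers everything under applications/
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: applications
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/back-stack   # placeholder
    targetRevision: HEAD
    path: applications
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      selfHeal: true
```

Each Application manifest the scaffolder merges into that path then becomes a child app that Argo CD creates and syncs on its own.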
So what I'd like to do is enhance my hub to pre-build that. We're looking here at the implementation for the hub cluster itself. If we think of the BACK stack hub as having key components, we've got Argo CD, Backstage, Kyverno, Vault, and cert-manager so we can self-sign everything. And then, importantly, if we look here, we can see that we were pre-creating this clusters application. What I'd like to do is enhance my composition to now include an applications Application in there. So we're going to copy. Did I copy? I did not copy; highlighted but did not copy. We'll paste this here. No. Did it twice in a row; isn't that the rule in jazz, if you make a mistake, do it twice? Paste. Ha, there we go. Everything is off by one. There we go. We're going to call this the Argo applications Application; we'll call it "applications", and it's going to look at the applications path. One of the things you'll notice here is that this repoURL is just nonsense: we're using the patch-and-transform mechanism within Crossplane to take the URL that was passed in via the hub API we built and patch it in, replacing the placeholder here. If we want to look at the API shape itself, we can query that straight out of the cluster (let me clear that; there we go): we can look at the shape of the API by asking Kubernetes to explain it to us.
So we're going to explain the hub spec parameters. These are the parameters we passed in when the hub provisions: step one, spin up a local cluster; step two, install Crossplane; step three, deploy a hub, and Crossplane will go out and do it. It needs a bit of information. The Argo CD config and Backstage config are just the host names for ingress control, to configure those charts properly, and then, importantly, there's this repo URL: we pass in the repository URL where everything's going to be connected. So we can go back here and see that we've finished updating our composition to use that same information and deploy this additional application. All we need to do now is update our composition within the control plane. There's a whole lifecycle for packaging and deploying into a container registry, and you can use Argo to synchronize that back down into your control plane, or you can cheat and just apply it directly. So if we want to apply the hub, we can just update our hub directly, and what we'll see is that the Application gets updated and the new resource gets pushed out. We can actually have a look at that if we want; let's have a look at the resource objects ("objects", that's what it's called; there we go). We can see our new Application here being provisioned, with the values that we put in. It successfully created our new Application, and we can see that it's been put in place. Again, because I didn't make it auto-sync, it's out of date, but now we can synchronize it, and the guestbook application that was synchronized down in there will now deploy into our target cluster.
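The hub API that kubectl explain walks through here is defined by a Crossplane CompositeResourceDefinition. A rough sketch of what such an XRD could look like, with illustrative group and field names matching the parameters just described (host names for Argo CD and Backstage, plus the repo URL):

```yaml
# Illustrative XRD behind "kubectl explain hub.spec.parameters"
apiVersion: apiextensions.crossplane.io/v1
kind: CompositeResourceDefinition
metadata:
  name: xhubs.backstack.example
spec:
  group: backstack.example
  names:
    kind: XHub
    plural: xhubs
  claimNames:
    kind: Hub          # the developer-facing claim kind
    plural: hubs
  versions:
    - name: v1alpha1
      served: true
      referenceable: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                parameters:
                  type: object
                  properties:
                    repoURL:          # patched into the app-of-apps source
                      type: string
                    argocdHost:       # ingress host for the Argo CD chart
                      type: string
                    backstageHost:    # ingress host for the Backstage chart
                      type: string
```

Because the schema is a real CRD under the hood, tooling like kubectl explain, validation, and the Backstage scaffolder all get the API shape for free.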
We can synchronize this one, layered into the remote endpoint, and while that's starting, let's grab our kubeconfig and download it. We're going to cheat again and use some port forwarding, because I didn't want to set up ingress on all these clusters. If we point at the downloaded kubeconfig, we'll grab our service, and we can see our helm-guestbook here; this guestbook is ready to go. Let's forward this port locally; let's make it 180. It does not want to do that. You know, this is what you get for doing it live. Oh, it's still starting; that's why. Too fast for the demo. See here: we even have our scan report from Kyverno telling us about all the things we did wrong with this deployment. Luckily we have it set to audit, but it's letting us know: are we meeting the Pod Security Standards, are we meeting our different deployment targets? I didn't call this out earlier, but I should have: we actually have a Kyverno generate policy in place that makes sure these app sets get deployed into the new cluster. So when a new cluster comes in, we're deploying those app sets into it; we're layering in applications. So each of the technologies in the stack has its part to play, and there's a natural assumption about which technology can do what, but then you find that they can do all sorts of new and exciting things around that. When I was first setting up the relationship between Crossplane and Argo CD, I needed to be able to register those kubeconfigs into Argo CD and set up the relationships with these additional clusters. (Oh, look, the demo cluster finished provisioning.) But I couldn't just do that, because Argo has its own format for how clusters are registered and how you set up the connection information. You can't just drop a kubeconfig in there and call it a day.
You need to provide specific information in a specific format, and I didn't know how to do that. So when I went looking for a tool, it turned out the tool I needed was already in my hands, and I'd been using it to enforce policies on my cluster. A Kyverno generate policy stands in between Crossplane and Argo to make sure the inputs and the outputs are correct and match up with what each technology is expecting. We can actually see that back here in the composition where we set it up: if we collapse this down, here is the cluster generation policy. This policy is responsible for watching for kubeconfig secrets and transforming them into an Argo-friendly config that can then be used to set up the connections into everything. So we've gone around the circle twice: we asked our catalog to build us a spoke cluster, and once that cluster was up and running, we went back to the catalog and asked it to deploy an application. You can use the standard scaffolding to create whole new applications, too, and load those into the environment. At this point you start to transition from "how the hell am I going to build that?" to "what else can I build?"
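A generate policy of the kind described might look like the sketch below: it watches for kubeconfig connection secrets and generates a Secret in the shape Argo CD expects for declarative cluster registration (the argocd.argoproj.io/secret-type: cluster label plus name, server, and config fields). The trigger label and the credential extraction are simplified and illustrative:

```yaml
# Illustrative Kyverno generate policy bridging Crossplane and Argo CD
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: register-cluster-with-argocd
spec:
  rules:
    - name: kubeconfig-to-argo-secret
      match:
        any:
          - resources:
              kinds: ["v1/Secret"]
              selector:
                matchLabels:
                  backstack.example/kubeconfig: "true"   # hypothetical marker label
      generate:
        apiVersion: v1
        kind: Secret
        name: "cluster-{{request.object.metadata.name}}"
        namespace: argocd
        synchronize: true
        data:
          metadata:
            labels:
              # This label is how Argo CD discovers cluster credentials
              argocd.argoproj.io/secret-type: cluster
          stringData:
            name: "{{request.object.metadata.name}}"
            # In a real policy the server URL and credentials would be
            # extracted from the kubeconfig, e.g. via JMESPath expressions
            server: "https://example-cluster-endpoint"
            config: '{"tlsClientConfig": {"insecure": false}}'
```

With synchronize enabled, the generated Secret tracks its source, so a rotated kubeconfig flows through to Argo CD without manual steps.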
And I really want to stress: none of my time was spent actually doing this stuff. It's a handful of YAML files. For building the APIs, when I decided I wanted a hub API, this is all it took to build it out, and any time I needed a new component, it was just as simple as what we did earlier. For the policies, these all exist: all of these Kyverno policies that are enforcing correctness on the cluster size, on the node counts, on the Pod rules, they're just out there. Argo has been synchronizing things from Git into control planes, and making sure that's done efficiently and effectively, with rollouts and all the different workflow that goes around that, for ages. It just works, and all it takes is a little bit of YAML to say "watch this repo and make sure things happen properly". So the effort required to actually build this up and tie it all together was remarkably small; the harder part was getting the bash script to do what I wanted it to do. So with that, I think we've covered everything we wanted to cover demo-wise, so just a couple of things to wrap up with. What's next? Obviously the bootstrapping that's happening here is driven primarily by the install script, and it's all aimed at a local deployment. However, nothing in that hub composition requires you to be running locally: you can deploy it into any Kubernetes cluster you want, you could even host the root control plane external to that cluster, and it'll layer in everything you need into that environment. Then there are additional use cases: namespace as a service, workload environments as a service, and then starting to load in additional capabilities from your control plane. I just recorded a video for a customer this morning; their use case is "we need to provide object storage as a service". So how would I, as a platform engineer,
build out that capability and ensure that I'm building my S3 buckets with the right set of policies and the right set of configurations, while only requiring the developer to give me a bucket name? You can build that out: that composition already exists, and you can define that API and roll it out within your platform. So all of these different use cases start to become available to you. Other things we wanted to add are around dashboards. We're using several different components, and as a platform engineer you want to know those components are up and running: is Argo CD healthy, is Kyverno healthy, are the Crossplane providers up and running? Having that kind of central dashboard, using something like Prometheus or Grafana, is something we would add next, and then potentially integration with other projects. Those are the things we're looking at next, but the repo is already live, so please try it out, provide feedback, and file issues. We'll continue to evolve this, working with the Argo CD team, the Upbound team, and the Backstage community. Absolutely; and you'll notice we already snuck one extra letter in there with Vault, so you can add letters all day long, but this is really the core of what you need to go full circle. And then please, please, please: if you liked what you heard, leave feedback on this session, and if you didn't, that's okay too.
You can leave that feedback too. Thank you very much. Any questions? I think we have microphones here; there's a mic here. "This is less of a question, more of a comment, but I really just wanted to say thank you for this idea and this stack. I've been participating in the working group on platforms for the last couple of months, as part of the maturity model and some other things, and the idea of a reference architecture for platforms has come up a lot; there have been some discussions going on. For you to say, out of so many tools available in the landscape, here are four you can draw a nice line around and deliver a ton of value with as a package that works really well together: I think that's extremely valuable. As an end user, I'm going to go home and try this. These are tools we're already starting to use: we're using Argo CD, and we're not using the others yet, but we're trialing Backstage. I want this package as a whole; I don't want to have to solve it on my own." Absolutely. Thank you. "Thanks for the talk. I had a quick question: do you do something like time-to-live in Kyverno? How do you control the spread of this? If I'm pushing something out to a cloud, I want to know I'm not going to run up my bill." Yeah, great question.
With Kyverno, we mostly talk about validation, mutation, and generation policies, but Kyverno also has cleanup policies. So you can set up exactly what you're talking about: if you want to clean up certain resources after a certain period of time, that can be done as well. One thing to mention about Kyverno: a lot of the time it's used for security checks, but as Murph pointed out, it's a great tool for automation, and what we've heard from a lot of users is that they've stopped writing custom controllers and leverage Kyverno for that kind of automation, through webhooks and so on. Yeah, absolutely. Thanks. "I'll echo the first comment: this is amazing. I'm familiar with Backstage and Argo, but Crossplane is new to me, and a big problem my team has been facing is that Backstage is great for spinning up stuff that looks correct at the time of creation, but then there's what people are calling the day-two problem: hey, I've just realized I need a new type of user in this reference application, or a new type of permission, or whatever. It sounds like Crossplane is a solution to that, right? I would define a type of application in Crossplane, update the definition in Crossplane, and everything else would magically filter out. Am I reading that right?" Yeah, absolutely. On that, we've had a few different talks, and after the fact you should absolutely go back and watch the talk by Jared and Clamont from Consensus; they did a great one about streamlining infrastructure with Crossplane two days ago.
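To make the TTL answer above concrete: Kyverno's cleanup policies delete matching resources on a schedule. A sketch, with a hypothetical claim kind, an opt-in label, and a 24-hour cutoff (the time_since comparison follows the pattern used in Kyverno's cleanup examples):

```yaml
# Illustrative Kyverno cleanup policy: expire demo clusters after 24h
apiVersion: kyverno.io/v2
kind: ClusterCleanupPolicy
metadata:
  name: expire-demo-clusters
spec:
  schedule: "0 * * * *"   # evaluate every hour
  match:
    any:
      - resources:
          kinds: ["AKSCluster"]        # hypothetical claim kind
          selector:
            matchLabels:
              backstack.example/ttl: "enabled"
  conditions:
    all:
      # Delete anything older than 24 hours
      - key: "{{ time_since('', '{{ target.metadata.creationTimestamp }}', '') }}"
        operator: GreaterThan
        value: "24h"
```

Deleting the claim is enough: Crossplane then tears down the composed cloud resources behind it, which is what keeps the bill under control.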
I think it was on Tuesday. Watch their talk, because it was excellent. But at a base level, Crossplane is the answer to the question "what if Kubernetes, but for all the things?" There are essentially two abstraction layers. There's a set of providers that integrate with any external API; we're using them here to drive Azure and AWS, the external things, but we're also using them to layer in Helm charts and other Kubernetes objects. And that's where the second abstraction layer comes in: exposing that behind a custom API. I define the shape of the API, and then I can have multiple different implementations behind it. So I can have my standard implementation, where everybody who wants a cluster gets a cluster, except for these guys: they're special, and we're going to build them a one-off composition that they can select. They're still using the same API, but they provide some kind of metadata, usually an annotation or a label; or, if you're building the choice directly into the platform, you'll make it a parameter in the spec. But if it's a one-off, it's usually a label on that type that allows the back end to choose the right composition with the correct behavior. "So publication of a new version of a Crossplane, whatever the entity is called, will get picked up by consumers?" Exactly, that's the idea. The default behavior is to auto-update; you can change that to manual update. But by default, if I put a new composition revision out there, which is what we saw earlier (I changed the composition, kubectl applied it into the cluster, and a new composition revision was generated), any instance that had been tied to the old revision jumps to the new revision and starts doing the new behavior. "This is staggering." Yeah, it's great.
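The "special team" pattern just described is typically a label selector on the claim: the same API for everyone, with the selector steering a particular claim to a one-off composition. A sketch with made-up names:

```yaml
# Illustrative claim selecting a non-default composition by label
apiVersion: demo.backstack.example/v1alpha1
kind: AKSCluster
metadata:
  name: special-team-cluster
spec:
  # Same API as every other cluster claim; this selector picks the
  # one-off composition labeled for the special team
  compositionSelector:
    matchLabels:
      team: special
  parameters:
    region: westeurope
    nodeCount: 3
```

Claims without the selector fall through to the default composition, so the special case never leaks into everyone else's API.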
Yeah Yeah, I think we are out of time, but we'll be here if you guys have any other questions any other questions meet us up here Thank you so much You