So welcome, everyone. We're from Fairwinds, and today we're going to talk about how to deploy in CI/CD as part of building a Kubernetes platform. I am Stevie Caldwell, and I'm a tech lead here at Fairwinds. I've been here for a few years now, with a long history of working my way up from desktop support to sysadmin, to network engineer, to DevOps, to here at Fairwinds, where I primarily work in Kubernetes and help support our open source projects. And I'll hand it over to Andy to talk about himself a little bit. All right, I'm Andy. I'm the CTO here. I'm also a reformed sysadmin, as I like to say. I've been doing Kubernetes for, I think I'm up to six years at this point, and I've been with this company for five now. I'm an author and maintainer of a bunch of our open source, and I just love talking all things Kubernetes — and these days, platform, because that's the new hot word. It is. All right, so a little bit about Fairwinds. I'm going to read off from this lovely slide and put on my radio voice. Fairwinds provides software for platform engineers running Kubernetes to standardize and enable development best practices: standardize, automate, and enforce Kubernetes best practices to ship applications faster. That's that. All right. Like Andy said, platform is the new hotness. I'm not going to read this slide off to you, because I trust that you can handle it on your own. A platform is essentially a way of allowing devs to deploy their artifacts quickly without having to pay too much attention to the underlying infrastructure. It abstracts a lot of that stuff away, but abstracts it in a way that's still safe, with guardrails and sane defaults and things like that. Internal developer platform, or IDP, is one of the terms that we use to talk about platforms.
I think that is a little confusing, because IDP stands for a lot of things in my world, right? Identity provider, intrusion detection and prevention. So I think we need to come up with some more acronyms, but that's essentially what it is. I read something that said a good internal developer platform should abstract away infrastructure decisions, enable self-service environment builds, integrate with existing continuous integration and delivery processes, and assign role-based access controls. So it's essentially a self-service layer for devs, but providing some good guardrails and security features so that nothing breaks. Hopefully, right? So, when we talk about a Kubernetes platform, we approach it as four different areas. Next slide, my friend. Sorry, I'm moving on so fast. There we go. We're talking about four components when we talk about a platform in the Kubernetes world. The first is add-ons. Those are the default tools that your platform engineers — or DevOps folks, or whatever you want to call that side — want installed in your clusters. This is how you want to handle DNS, how you want to handle certificates, how you want to handle exposing your workloads to external clients or sources. We do have a video where we talk about installing add-ons with GitOps.
The other component of a Kubernetes platform we've covered is governance, and that is how you secure your platform, secure your environment, and try to prevent bad things from happening. That could be what you want enforced for resource requests and limits on workloads that get admitted to your clusters, Linux capabilities that you want to either enforce or prevent on workloads, labels, namespaces that people can deploy to — all sorts of things fall into that governance component. And today we're going to be talking about the deployment component. Those other two focus more on the DevOps and platform engineering side of things, and today we're going to look at things from the dev side: what does it look like to deploy into an environment that has these other things going on? Andy has set up a wonderful demo for us, and he's going to walk us through it. But before we do that, I did need to ask Andy an important question. The question is: do you know why the mobile phone was wearing glasses? I do not. It lost its contacts. Nice. That was a good one. So with that, I'm handing it over to Andy to drive us through this demo. All right, see if I can find the right screen this time. Oh, don't answer that polling question yet — save it for later. All right, so I'm going to spend a decent amount of time talking about the whole setup here, because there's a lot going on. What I've done here is essentially build what I would see as a good starting point for a platform: it's got a decent amount of policy in place, it's got add-ons, it's got some self-service stuff.
So I'm going to talk about how we've tied together all these concepts we've covered in the past — deploying add-ons with GitOps, enforcing policy in pull request reviews, all of that — and, as Stevie said, approach it as: I'm a developer and I need to deploy an application. I have actually removed a lot of my own permissions from this cluster, because typically I would be the admin of the cluster. So we have a Kubernetes cluster, but how do I get to it? How do I access the cluster? What am I allowed to do, what am I not allowed to do? We have here an example of how you might set this up. We run Vault internally at Fairwinds, and Vault has the ability to hand out AWS credentials. In an EKS cluster, we can associate those AWS credentials with specific groups or roles in the cluster. So I have run a command — you might have a lightweight CLI utility or a bash script that you hand out to folks — that basically goes out to Vault, gets some credentials, and puts them in my environment. So I'm going to do that. And we see here that I have an AWS identity right now that is associated with this assumed role, team-one. I just gave it a name as if I was on a specific dev team, or however you wanted to break it up. Now, my developers know a little bit about Kubernetes; they know how to run a kubectl command. So we could do things like kubectl get nodes — or wait, no, I can't, because I'm not allowed to do that. But we could do a kubectl get pods and start to see what's running in this cluster. The beauty here is that I've given access to the cluster to my developer, but if they wanted to do something like delete the ingress-nginx namespace, we're going to stop them from doing that.
So the first tool involved in this platform is called RBAC Manager. RBAC Manager is a Fairwinds open source tool that makes building RBAC bindings easier. If we take a look at our RBACDefinition, which is the CRD associated with RBAC Manager, we'll see that we have this RBAC binding for team-one. It's associated with the group team-one — that's configured by the fact that we're coming in with the team-one role from AWS. And we see we have a ClusterRoleBinding to the built-in view role. Now you might think, we don't want people to be able to view certain things in the cluster, and luckily the built-in view role is very smart about that. We can't get secrets across the entire cluster — that would probably be a bad thing. So we're really limited to just being able to see what's going on in the cluster, and I think that's valuable for developers who want to know what's available and what's happening there. They don't necessarily have to, but they can. The next really cool thing that I really like about RBAC Manager is that it can do dynamic role bindings. What that means is that we as administrators can create namespaces that get RBAC bindings associated with them automatically, as soon as they're created and labeled. We have here a role binding on the cluster role edit that is bound by a namespace selector — which you can't actually do in plain Kubernetes; this is what RBAC Manager does for you. Any namespace that matches the label team-one: admin will give the edit cluster role to that team-one group. So if we look here — yeah, I had a quick question. You might be getting to this, but is this a dynamic thing, where even after I've applied this to the cluster, any new namespace that I create will fall under this rule as well? Yes. Okay.
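The RBACDefinition Andy is describing would look something like this — a minimal sketch based on RBAC Manager's CRD, assuming the group and label names from the demo; the metadata name is hypothetical:

```yaml
# Hypothetical RBACDefinition matching the demo's description.
apiVersion: rbacmanager.reactiveops.io/v1beta1
kind: RBACDefinition
metadata:
  name: team-one
rbacBindings:
  - name: team-one
    subjects:
      - kind: Group
        name: team-one            # the group mapped from the assumed AWS role
    clusterRoleBindings:
      - clusterRole: view          # read-only cluster-wide (the view role excludes secrets)
    roleBindings:
      - clusterRole: edit          # edit, but only in namespaces matching the selector
        namespaceSelector:
          matchLabels:
            team-one: admin
```

RBAC Manager watches namespaces and creates or removes the concrete RoleBindings on the fly, which is what makes the dynamic behavior in the demo possible.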
And my second question — my immediate thought — was: what happens if I just label a namespace with that team-one: admin label? Great question. Because of my already existing, very limited permissions, I can't, say, label the kube-system namespace with team-one equals admin, because I don't have permission to patch namespaces anywhere; I don't have edit permissions anywhere. Nor can I create namespaces — if I wanted to try to create a namespace team-two, I can't do that either. So there's no available privilege escalation here. Very good question. You obviously have to be careful about that when doing RBAC; we don't want privilege escalation just built into the system. So, in the bottom half of the screen I'm an administrator, and in the top half I'm a developer. As an administrator, I create the namespace team-one-a, and I label that namespace with team-one equals admin. Then I can do an rbac-lookup on team-one — rbac-lookup is another Fairwinds open source tool — and that lets me see that team-one has edit permissions in this namespace, team-one-a, because I just created that label. It's definitely dynamic; it happens on the fly. As administrators we can manage those namespaces however we want: manually, with another tool, with GitOps, however we want to manage namespaces. But now we don't have to manage RBAC bindings for every single namespace we create, so it's very easy to say, hey, a developer needs a namespace — boom, here. You could even build a self-service way to do that, with approvals or however you want it. So I'm going to double-check: team-one equals new. We're going to relabel the namespace. And let's do our rbac-lookup — and all of a sudden our edit permissions in the team-one-a namespace are gone, even though the team-one-a namespace still exists.
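The admin-side step above — create a namespace, label it, and RBAC Manager does the rest — amounts to a manifest like this; the namespace name and label come from the demo:

```yaml
# Hypothetical namespace an admin creates for the team.
# RBAC Manager's namespaceSelector matches on the label and
# automatically binds the edit role for the team-one group.
apiVersion: v1
kind: Namespace
metadata:
  name: team-one-a
  labels:
    team-one: admin
```

Changing the label value (as Andy does with `team-one: new`) makes the namespace stop matching the selector, and RBAC Manager removes the binding again.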
The first thing about giving developers access is giving them the right amount of access, so they can do what they need to do without being allowed to do just anything — like delete your ingress controller — because that might make a headache for the other developers working in this cluster. So I have edit permissions here in the team-one namespace, my context is currently set to the team-one namespace, and we see we can get the secrets in this namespace. I can delete secrets in this namespace if I want — which I actually want to do — and do whatever we need to within this namespace. So that's the RBAC that's involved here. The next thing we'll talk about is add-ons. Kubernetes gives you that base-level API, but now we need to do stuff on top of it — Stevie alluded to this. We need to get traffic into the cluster; we probably have apps that we want to expose to the world; we probably need certificates; we probably need DNS. We need autoscaling — there's some built into Kubernetes, but we can do better. We need to get new nodes provisioned as we add workloads into this cluster. And that's where installing add-ons in the cluster comes in. So if we go back to the screen: we have in this cluster Argo CD, which is a GitOps tool, and it is managing a whole slew of add-ons. We have multiple projects within our Argo CD configuration, and if we look at the infrastructure project, we see all these add-ons: we've got ingress-nginx, cert-manager, external-dns. We've got the CSI driver, because we need that. We need a load balancer controller, so if people need TCP-based services they can do that instead of having to go through an ingress. And Argo CD itself is here, obviously. Now, you may be wondering: as a developer, why can I see all this? Why is this here? I only have view access.
And why is there this delete button? Didn't you just say I can't go delete ingress-nginx? And the answer to that question is no, I cannot, because we have Argo CD set to allow anybody to view the infrastructure, but not to delete or modify it. And this is valuable, because sometimes you're using an ingress-nginx object and you're not sure if a request is making it through the ingress controller to your pod, and you need to debug that. Now I can, as a developer, actually go view the logs for the ingress controller. That's actually a problem — I'm not sure what's going on there. We do live demos, for sure. So we've given all of the necessary access for them to do that. And again, you may be wondering about secrets here. The lovely thing about Argo CD is that if we have secrets and we go look at the live manifest, those are obfuscated. So giving people the ability to view what's going on in the cluster, I think, is super valuable. Do they have to look at it? No. Yeah, that is one of the things I always remember hearing that devs rightfully complain about: when something goes wrong, not being able to even begin to troubleshoot for themselves, because you have things so locked down that they don't have access to even try. We complain on the one hand that we have to troubleshoot this thing, but we don't give devs the tools to try to do it themselves. So this is super, super important, I think. Yeah, definitely, totally agree. If the ingress controller is just totally down, a developer can go see that and be like, hey, is this what's causing my problem? — instead of, hey, my stuff's broken and I have no idea why. So yeah, empowering and enabling rather than restricting. All right, so let's talk about deploying. I'm a member of team one, and I have this app that needs to go into this cluster.
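One way to get that "everyone can view, nobody can modify" behavior is through Argo CD's RBAC ConfigMap. This is a sketch, not the demo's actual config — it assumes Argo CD's built-in `role:readonly`, and the `team-one` group name and project are carried over from the demo:

```yaml
# Hypothetical Argo CD RBAC config: read-only by default,
# with app management allowed only inside the team's own project.
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
data:
  # Anyone who matches no other policy falls back to read-only,
  # so developers can view add-ons and logs but not delete them.
  policy.default: role:readonly
  policy.csv: |
    p, team-one, applications, *, team-one/*, allow
    g, team-one, role:readonly
```

The effect matches what Andy shows: the delete button exists in the UI, but pressing it as a developer is denied.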
How do I do that? How do I get in here? We're just going to create a new app. Ideally we will have documented this process and handed it out to people, and I'd be following a document — but since I made the process, I'll just follow it from memory. I'm going to call this andys-awesome-demo. And I'm a member of team one, so let's put it in the team-one project. And let's see what this self-heal and prune stuff is all about — Argo is supposed to make everything better, so let's try that out. I don't even know what all this stuff is, so I'm going to ignore it. Then we have this setup here: we've already populated the Argo setup with credentials for any of the FairwindsOps GitHub org repositories. There are a bunch of different ways to do this with Argo, but basically providing access to these code repos automatically is super valuable. I actually have this in a different org — my personal org — for various reasons, but we're just going to hook it up to that GitHub repo and go ahead. This is really big; this is part of what the platform team was set up for, right? They would set up the GitHub repository stuff and the credentials to allow Argo to talk to the repositories that you need. Definitely, yeah. If we go look at the Argo CD configuration, we can see the Fairwinds infrastructure repo is added here. Argo does this via secrets that have specific labels on them — this is a repository secret. And we've pre-populated the credentials; it's actually split between two different objects. So in theory, I believe, when I go to add a new app here, I could say FairwindsOps/demos and it would go ahead and add that repository for me, since we already have what's called a repo credential template in Argo CD that's been deployed with the credentials.
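Those two objects use Argo CD's declarative repository setup, where secrets are identified by the `argocd.argoproj.io/secret-type` label. A sketch — the secret names, repo URL, and credential values here are placeholders, not the demo's real config:

```yaml
# Hypothetical credential template: applies to every repo under the org URL prefix.
apiVersion: v1
kind: Secret
metadata:
  name: fairwindsops-repo-creds
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repo-creds
stringData:
  url: https://github.com/FairwindsOps
  username: git
  password: <github-token>        # placeholder; never commit real tokens
---
# Hypothetical repository entry: inherits credentials from the template above
# because its URL shares the template's prefix.
apiVersion: v1
kind: Secret
metadata:
  name: infrastructure-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  url: https://github.com/FairwindsOps/infrastructure
```

Splitting credentials (repo-creds) from repository entries is what lets a developer add a new repo in the UI without ever handling a token.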
It's definitely something that you have to configure ahead of time to make things a little easier, so you don't have developers trying to create access tokens and all that. And you're creating this application — which is a specific type of object in Argo CD — in the UI, but you could also set it up so that your developers create Applications using YAML and the whole thing. Yeah, definitely, and actually I was thinking about that: you could do both, and you could even let teams build their own app-of-apps for Argo CD. I could create this one app manually that points to my Argo repo, which is just a list of other Applications, and it then goes and adds more apps — so the team could manage that however they wanted. I think that's part of the self-service ethos: you have options. You can do a much more advanced option, or you can do it this way. The Application manifest here won't necessarily be as-code, which is not ideal, but it is still an object in Kubernetes; we could save it as code if we wanted. Yeah, I've thought about that a lot, actually, because there's a lot of different ways you can deploy with Argo CD. So in my demos repo here, we have a demo app, and it's got some objects in it, so I'm just going to point it at that directory. And then we only have access to one cluster in this list — if there were multiple clusters, if we were doing more of a push model, there might be more clusters involved here. We already know from our RBAC demonstration that we only have access to the team-one namespace, so let's just focus on that. And then we will do nothing else, and we just hit create here. We can filter the projects down to just our project, and we'll see we've already created andys-awesome-demo.
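Declared as YAML instead of clicking through the UI, the same Application might look like this — a sketch; the repo URL, path, and names mirror the demo but are assumptions:

```yaml
# Hypothetical Argo CD Application equivalent to what Andy built in the UI.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: andys-awesome-demo
  namespace: argocd
spec:
  project: team-one                 # the project the team is allowed to use
  source:
    repoURL: https://github.com/sudermanjr/demos   # assumed personal-org repo
    targetRevision: HEAD
    path: demo-app                  # directory containing the app's manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: team-one             # the only namespace RBAC lets the team deploy to
  syncPolicy:
    automated:
      prune: true                   # remove objects deleted from git
      selfHeal: true                # revert manual drift in the cluster
```

Committing this manifest to a repo is also how the app-of-apps pattern works: a parent Application whose source directory contains more Application manifests like this one.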
And we're deploying objects — that's great. It's starting to sync, because I hit the auto-sync button. And we have a failure. What's going on here? Admission webhook insights.fairwinds.com denied the request: privilege escalation should not be allowed, CPU limit should not be more than 20% higher than the request, and it should not be running as privileged. Oh, this is the terrible workload, Andy. What have I done? So this is where we get into the policy and guardrails portion. We've given team one the ability to create pretty much whatever they want in their namespace. We've kept them from deleting everything outside that namespace, and from getting into secrets and things they shouldn't have access to. But there are a thousand ways to deploy something to Kubernetes wrong — forgive my terrible English there, but there are a lot of ways to configure your deployments improperly. So let's take a look at my demo app — actually, let's just go into the directory, make this a little bigger — and take a look at this deployment. I could have put in a really basic deployment and left out a whole bunch of this other stuff. I have a security context here, and as written, privilege escalation would be allowed, we would be running as root — we would be doing all of these things that we strongly recommend you don't do, but that are really easy to do, because they're the defaults in Kubernetes. And so we have a whole series of policies applied to this cluster that prevent you from doing that. If we go back to our message, we'll see the first one is that privilege escalation should not be allowed, so let's go ahead and fix that. And then we can't be running as privileged — that's probably the most egregious one in here. Running a privileged pod is just, you know, keys to the kingdom.
And it said my CPU limit should not be more than 20% higher than my request, so I'm going to change it to be the same as my request. But also, this thing needs more memory, so let's just bump that up. And I think that was all of the issues, so I'm going to get those all committed: "fix my issues." I don't know if I have enough time for that today. I was going to say. So we'll just push this up, and I'm going to be impatient and tell it to hard refresh. Let's see if we can get our deployment out there. At this point — especially if we have these policies documented somewhere — I as a developer haven't even had to talk to an infrastructure engineer to try to deploy my app. What I have is one doc with the basic steps, and then some decent error messages that said, hey, you can't do this thing. Oh right — and so that's the process. Sorry, I have to terminate the previous sync because it failed, and then we'll do a new sync after we refresh. Oh, we've got another error: my memory limit — I can't set that 21% higher. Goodness, y'all are restrictive here. This is when the devs start shaking their fists and pounding their desks at the platform team — jeez, so many restrictions. All right. So, it would have been really nice if I didn't have to wait until it deployed to find out about this problem, don't you think? Yeah, I do — because this whole going back and forth and redeploying and redeploying is annoying. Right. It's a lot of effort and a lot of extra time; it'd be great if you could catch this farther left. Agreed. So let's make a pull request for our new change, because we've gotten past the POC stage; we really need to start making pull requests on this app. We can't just be pushing everything straight into main like we have been.
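The fixes Andy commits amount to something like this in the deployment's pod spec — a sketch; the image, container name, and numbers are placeholders, but the shape addresses each policy denial from the demo:

```yaml
# Hypothetical fixed pod spec for the demo deployment.
spec:
  containers:
    - name: basic-demo
      image: nginx:1.25                   # placeholder image
      securityContext:
        privileged: false                 # no more privileged pod
        allowPrivilegeEscalation: false   # fixes the first webhook denial
        runAsNonRoot: true                # don't rely on the root default
      resources:
        requests:
          cpu: 100m
          memory: 256Mi                   # bumped up: "this thing needs more memory"
        limits:
          cpu: 100m                       # equal to the request, within the 20% policy
          memory: 256Mi                   # same for memory, avoiding the later denial
```

Setting limits equal to requests is the simplest way to stay inside a "limit at most 20% above request" policy, at the cost of no burst headroom.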
And let's go take a look at our pull request. All right — oh, we've got a required check here: Fairwinds Insights. Hey, that's that thing that was blocking me earlier. What is this Fairwinds Insights thing? Oh, look at this. Okay, I fixed some action items: deployment, demo, basic-demo — great name, Andy — "memory limit should not be more than..." — that's that error I had earlier. Yeah, and it's been fixed. So great: I got the feedback here, it says it's all good to go, and hopefully if I merge this, we're good. Yeah. So obviously I'm acting a little bit here, a little tongue in cheek. The first time you did it, I was actually like, wait, what happened, what went wrong — you were really convincing. I mean, that happens a lot in my demos, so it's totally fair. So this is all being done with Fairwinds Insights. Fairwinds Insights is the engine that's powering all of these guardrails and all of this automation behind the platform here: Argo CD plus Insights, and the other open source tools I talked about. So if we go into our cluster and look at the admission controller, we can actually see our previous failed deployments. We see here: failed deployment, demo, basic-demo, in namespace team-one. And we look here: memory limit should not be more than 20% higher than the request. If we look at this one, it looks like we did that one twice — Argo's probably retrying — but in general we can see the reasons we were failing. And if we go back to our PR, we'll actually see there's a link to view this report in Insights, which takes us over to our Repositories tab. If we look at the sudermanjr/demos repo — we can ignore that one; I haven't figured out how to make it go away yet — we can take a look at our branch, which is fix-memory. And we can see this fix, which is the exact report we got in our PR — and I didn't have to go log into Insights; it just does all this for me as an administrator.
I have all of this configured, so if we go take a look at our policy, and we look at our OPA policies, you can see these policies define the various things I've tried to do here: the memory limit being too high, the CPU limit being too high. If I had tried to create a hostPath mount, that would be blocked. Those ones I wrote myself, but then we also have a whole bunch of built-in stuff — we have the Polaris checks as well. So for example — actually, let's just do it. Let's say, you know what, I'm tired of dealing with this 20% resources thing; let's just delete the resources block. Because I would totally do that as a developer. Yeah, I'd be like, I am tired of this, how do I get around it? I'm just going to delete it. All right, so we'll push that up into our PR — oh, I already merged that one. Okay, I'm just making stuff up here as we go; let's go look at this new PR. And these PR names actually very much will sometimes look like that — random all caps, "ARGH" — depending on how many times you've iterated on an issue. We've all had the chain of commits that gets increasingly more aggressive as we try to fix a particularly frustrating problem. But now we have four new action items created by this PR, saying, hey, you can't remove the CPU and memory requests, because that's a problem in a cluster. This is about the time the dev throws their computer out the window, by the way. We've documented all of this, we've given guidelines up front, we've provided templates that suggest how to set these values, and things like that — so ideally you're not coming at this cold and having to go through this process; we're just showing how you can put it all together. All right, so if we go back to our demo app — look at that, everything is deployed. We can see we've got a ScaledObject here; that's a KEDA thing.
And actually, we'll talk about that a little bit, because now I've got my app deployed — that's great — and we have this ingress; it probably has some sort of hostname. So, to our demo: see, it's working, hooray. Looks like I only have — why do I not — there it is. Why do we only have one pod? Yeah, that's interesting. Maybe there's something wrong with our ScaledObject; we can take a look at that in a second. So we have our app deployed, that's great. And we can go into Insights and see any particular action items that might be specific to our cluster. We can take a look at the sandbox cluster and filter down — like, if we're really curious about the other stuff that wasn't blocked, that we were just allowed to do, we can take a look at that. Not a lot going on here, just some small issues. But if we're not certain that our resource requests and limits were quite right, we can also go over to the efficiency tab and take a look at the namespace — which may not have any data in it, because I literally just created the apps in here — but we can take a look at an example workload over time, and it'll show us what it thinks our requests and limits should be. So if my boss comes in saying, hey, you're spending too much money with this app, I can come in here and very quickly and easily see the suggestions for that. There are things in the cost tab related to that as well — I'm not going to go into it too deeply, but it's all available here. So let's take a look — we've got a few minutes — let's try to figure out what's going on with our ScaledObject. Oh, we have two pods now; that's a good sign. So KEDA is another open source project — not a Fairwinds project, just an external project; I can't remember who works on that.
But it manages your horizontal pod autoscalers for you, and I really like it as a tool for operators to put in the cluster for folks to use, because it's much simpler to use lots of different metric sources for your horizontal pod autoscalers than with raw HPAs. So if we take a look at this ScaledObject, it's fairly simple: I'm targeting that deployment, I've got a min replica count of two, I've got a max of 20 — but the cool thing here is this trigger. Normally, when you create your first HPA, you're going to do it on CPU. You're going to say: I want a CPU target of 80%, scale up and down. And for most applications that's just a proxy for performance — it's not really a great metric to scale on, because the end user of your application or product isn't going to experience high CPU as a negative or positive effect, right? They might see latency as an end user, and high latency may be a reason to scale up. So anyway — and if you're running a web deployment, or a web service of some kind, even a little latency is enough to quickly have a user click off of your webpage; people don't want to wait for that stuff. So what you're about to talk about is certainly a better measurement for how to scale up. Sorry. You're good — I totally agree, and that's a great point about web services. If it's more than a few hundred milliseconds, I am out. We're impatient human beings, for sure. So here's what this allows us to do: we've installed Prometheus as one of the add-ons in the cluster, and we can scale on Prometheus queries right out of the box. There's no extra deployment or anything like that.
So I, as an end user — knowing this from the documentation so graciously provided by my ops engineer — know that if I go to Prometheus in our sandbox here, I can actually make queries. I'm worried about my demo, basic-demo — probably some sort of request metric. So: nginx_ingress_controller_requests. What do we have here? A whole bunch of stuff. But we can filter on the exported_service. I'm really enjoying you putting together a Prometheus query in real time. That is always fun, right? Although the new versions of Prometheus make this way easier. So now let's do a rate on that. All right, now we have a rate across all three ingress controllers. We'll just sum that up, and — look at that, now we have a request rate over a five-minute window. We can bump that down to one minute if we want; let's do five, make it a little less responsive. And we can look at the graph of that — it's been going up because I opened this and it actually starts pinging. So if anybody wants to just hammer at demo.sandbox.hillghost.com, feel free. Maybe I'll actually do that — there's a ping endpoint; let's just fire it up. And then we can increase our request rate here. I believe there are latency metrics available — I'm not certain; that's not super important. But we see here nginx_ingress_controller_requests by host — I put exported_service; either one works — and I want it to be about 20 requests per pod. So what KEDA does is, in our team-one namespace, it goes ahead and creates an HPA, and then also serves the metric up to that HPA. So you don't have to deal with custom metrics providers and all that, and KEDA has a whole bunch of different scalers. It's a great tool for a platform to give your developers the ability to scale better and more intelligently. So ideally — we look here — we've already scaled up to three pods, and everything is green. Oh, there we go — now we're taking off; got a few more pods.
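Putting the pieces together, the ScaledObject described above might look roughly like this — a sketch using KEDA's Prometheus trigger; the object names, Prometheus address, and the exact query are assumptions based on what was said:

```yaml
# Hypothetical ScaledObject matching the demo: min 2, max 20,
# scaling on roughly 20 requests/sec per pod from Prometheus.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: basic-demo
  namespace: team-one
spec:
  scaleTargetRef:
    name: basic-demo              # the deployment KEDA builds an HPA for
  minReplicaCount: 2
  maxReplicaCount: 20
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus-server.prometheus.svc   # placeholder address
        query: sum(rate(nginx_ingress_controller_requests{exported_service="basic-demo"}[5m]))
        threshold: "20"           # target value per replica
```

KEDA divides the query result by the threshold to pick a replica count, then manages the underlying HPA and metrics plumbing itself, which is exactly the part Andy says developers no longer have to deal with.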
Oh, I'm pinging this twice. Got to love that curve. I may have hit it a little too hard. So yeah, we've gone from zero to deploying an application with ingress. You may also have noticed that this demo endpoint is HTTPS, and I am getting a valid cert, and it already created the DNS for me on the fly. So I could rename that DNS name if I wanted, redeploy it, and I'd get the same thing, because we have external-dns and cert-manager running. I as a developer didn't need to know that; all I needed was the documentation to tell me this is what my ingress object should look like: this annotation to get a cert, and then set your hostname. And there you go. So, you know, I could probably write a two-page doc on how to use the platform and you'd be able to onboard apps into this platform. Now, obviously we're missing some capabilities. We don't have databases yet, unless you wanted to run them in Kubernetes. We don't have, I don't know, maybe you need a queue or something like that. But there are things that we can layer on top of this; this is a great starting point, a great way to enable people to deploy. So, that's awesome. That's A to Z, walking through a development process on a platform, and pretty painless all things considered, right? Like you said, it's predicated on you providing good documentation, because it's not going to be cool to throw up guardrails, which are great and important, and then not show people how to remediate those problems when they pop up. And it doesn't do you any good to shift left if you're not showing people how to fix that stuff. But with all that put together, this looks like it creates a pretty seamless way for developers to really just control their process when it comes to deploying their applications, which is great.
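As a sketch of the kind of thing that two-page doc would show, an ingress object like this is about all a developer would need to write; the issuer name, service name, port, and TLS secret name here are assumptions, not values from the demo. With cert-manager and external-dns running as add-ons, the annotation gets you a certificate and the host rule gets you a DNS record automatically:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: basic-demo
  namespace: team-one
  annotations:
    # assumed ClusterIssuer name; cert-manager issues the cert for the TLS hosts
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  rules:
  - host: demo.sandbox.hillghost.com   # external-dns creates this record
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: basic-demo           # assumed Service name
            port:
              number: 80
  tls:
  - hosts:
    - demo.sandbox.hillghost.com
    secretName: basic-demo-tls         # cert-manager stores the cert here
```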
So, question for you. This is a bit philosophical, a conversation I've been having with some folks about this whole platform thing. On one end you have just pure Kubernetes, right? Just plain raw Kubernetes with the API. We've already talked about how that's not adequate: you're missing some functionality, and you also don't have good security and all that stuff. The far end over here is what some large companies have done, some are famous for it, which is create an entire abstraction layer where you as a developer write code and do nothing else, maybe one little file that's specific to your company, specific to your platform, and everything in the middle is obfuscated. What I've shown here today, I think, is much more of a middle ground: there are guardrails in place that don't let you go too far, there's a lot in place, but you still have to write some Kubernetes YAML. You might even have to write a Helm chart. And I'm curious where you think the right place to land in that spectrum is. So my opinion is, if we have the big restricted companies over here and we've got the wild west over here, I feel like going a little farther left of center is where I would fall. Because when you get to the very far end of that, like the big corporations sometimes do, it does create a lot of friction and makes it more difficult for developers. There's having a hard time getting your work done because you have to take care of everything yourself, and there's having a hard time getting your work done because you're hemmed in by some very strongly held opinions and restrictions. I tend to think of those platforms almost as a platform-as-a-service kind of thing versus a developer platform, right? Because it's: here are the tools and things that you use.
And this is how you interact with the thing. But I feel like when we create these kinds of platforms, we're kind of building around what the devs already use: we're incorporating the tooling and working around the workflows they're familiar with already, and just adding almost like some templates on top of it. So I think that's better. You don't want to get too restrictive, because that just introduces a whole different set of difficulties; it doesn't remove difficulties, you know what I mean? I agree. And there are two arguments that people make, I think, for going the full-blown abstraction layer. One is, "I don't think developers should have to learn Kubernetes," which is an interesting one, because one of our customers actually made this point recently: if any developer out there says, "I know our company runs Kubernetes, I want to get better at how we deploy our application," they can go take a training course on Kubernetes. But if you've built this whole abstraction layer, that course doesn't do them any good; they don't know where the connecting lines are. And then it's a massive effort. It's just so much work to build something that is flexible enough and big enough to do that. Some companies have the resources for it, but not a ton that I know of. So I tend to agree with you completely: it's okay to have to learn a little bit of Kubernetes.
Let's try and bring the floor up, make it easier so that there's a happy path to deploying, and we can unlock all this other stuff later, or open it up a little if there are more complex things that need to be done, and we're not so hemmed into, to use your term, our process that we can't expand without months of development effort to fix our platform. So right, I think this is the way. I agree, I agree. And I also think there's benefit to having a basic understanding of how the things you're developing will run on the platform you're running them on. There's some importance to knowing how your application will run on Kubernetes, because it's not the same way it's going to run on something else. So I think there's a benefit there as well, because there are going to be some things you're going to want to change and alter in your application to make it run more smoothly. You can't hide from it, so you might as well have a basic understanding of it. And yeah, I do agree that having to really deeply learn a whole other set of tools or concepts just to get your work done isn't ideal either; that slows down the process. Exactly, exactly. So let's not hide it, but not necessarily make it so you have to be a Kubernetes expert to deploy. Right, right. We don't need to hide the fact that we're running Kubernetes under here, and we can expose some of the really great stuff that Kubernetes and all these other tools give us. Agreed, agreed. All right.
We talked about Fairwinds Insights, we talked about platforms, and there's lots of good stuff in there; I think we only scratched the surface of all the different pieces of Insights that you can use. We were very much focused on the platform today, but there's cost management and all those great things built in, and you can tie them into your platform. And then, oh right, there's a free tier of it, we didn't talk about that. So Insights: everything I did today was totally included in our free tier of Insights, which you can use on up to two clusters. So you can totally recreate all of the things I did today, if you really want to, using Insights on the free tier, so go check that out if you're interested. And then all of our open source is on our GitHub page, and it's incorporated into Insights as well. That's right. All right, well Stevie, thank you for hosting and asking all the good questions. Thank you for driving and doing all the good tech things. I try. And thank you, everybody who showed up today to listen; we appreciate your time, and we hope you all have a great rest of your day. Bye.