Hey Thomas, I cannot hear you, still not. Are you able to hear me? I'm still not able to hear you. Am I speaking? Yes, I think so. Okay, I see you responding, so it looks like you are able to hear me. Can you hear me now? Yes, now I can hear you. Oh, I lost you again. Okay, I think I have some problems with the microphone. But now it works. Now it works. Yeah, now I'm able to hear you properly. So exciting. Are you based in the Eastern time zone, or? I'm in Europe, I'm from Austria. Oh, okay. So it's actually midday for you, lunchtime. No, it's already shortly before evening. Oh, it's almost evening, okay. So 5 p.m. in my time zone. Oh wow, I didn't realize that Austria is that much ahead. I'm in Austin, Texas, so it's just about 10 o'clock in the morning right now. Early in the morning. Yeah, and you have quite a packed agenda today. That's great. As always, lots of cool presentations. Yes, I hope we don't run out of time, but I posted the link for those of you who don't have it, so you can join in, folks.

Do we have somebody from the Konveyor project here already? I'm not sure whether they want to present today or another time. If not, they already have two presentations scheduled; I just want to make sure that somebody joined. Hi, this is Bill Dettelback from Red Hat. Bill, are you planning to present on Konveyor? No, no, I'm actually not with the Konveyor team, I'm with the Quay team, but we're just kind of visiting. We haven't been on a meeting for a while; we actually presented last summer. We're just kind of getting re-engaged with you guys again, so we're just here to listen. Okay, perfectly fine. I just wanted to ensure that people get a chance to present today. You actually picked a good day, because we are pretty packed, not just with updates, and I would never call them boring updates, but with actual project presentations today, which is great. Given that the agenda is pretty packed, I would propose we jump right in. If there is anything you want to discuss, Bill, obviously feel free to add it to the agenda. For presentations we usually try to keep them to 15 to 20 minutes.

Okay, we have KubePlus and LitmusChaos today. My proposal, because I already see them here, is to start with the LitmusChaos project update, and also the next step, where they want to apply for incubation. I think it's good to see what has happened in this project, and I can guess who's here from LitmusChaos, given the ChaosNative background here. LitmusChaos is the chaos engineering framework, but I'd like to directly pass over to them and give them the stage for a short update. Again, please try to keep it to roughly 15 minutes. All right, I'll need to run through then; I really planned for 20, but I'll try to make it in 15. Thank you. We will survive if it's 20, that's fine. All right, thank you very much.

Hello everyone. Hi. So, basically, we are presenting here because we applied for moving to incubation; a couple of meetings ago we did discuss it here, and as part of that I'll provide a quick project snapshot update, what we have done in the last nine months, and where we are in terms of the project itself. So here is the PR; we applied around November, in the KubeCon timeframe, and after that we've been a bit busy, really working through Litmus 2.0, and there was also a big community event. So this is a good time to come back and present.
On the project snapshot update: the maintainers remain the same. Intuit continues to be active, Amazon is a docs maintainer, and as for the primary sponsor, we were a team under MayaData, and recently we spun off the project and the team to start ChaosNative, with the goal to really focus on Litmus. So that's good news; at least my time used to be divided between two CNCF projects, and now it's just Litmus.

Apart from that, I'm happy to state here that, taking the stats from DevStats since sandbox, there are a lot of good contributions that we received from other companies and individuals from those companies; this count is really a combination of PRs, PR reviews, and created issues. There are many notable contributions from Intuit, Red Hat, and Microsoft, which has also been a continuous help with a lot of the testing they do on Azure. We also maintain a list of adopters formally; we encourage whoever is using Litmus to come, tell us, and fill out this application. So far this is the list. Recently we got a telco reference as well: Orange has been pretty active in using Litmus for their chaos needs and also presented recently at a conference. Intuit was there earlier; recently a few more, including Orange and Okteto, were added. And there are many other large users of the project who have not yet declared themselves as adopters formally, but there are public references; they're very much active in the community and they've talked about using Litmus in various forms. Some of these are ready to provide a reference if required by CNCF, as they've been using it in production and in other forms as well. So that's a quick update on who's using Litmus.

In terms of the stats themselves, one of the things that we continue to add is new experiments. That's the purpose of creating the ChaosHub: we don't want the core team to write all the experiments, which is almost impossible with the kind of growth we're seeing everywhere. So that's working well. In the meantime, we have concentrated on building the project itself to have a super solid foundation for chaos at scale. We see the project adoption slowly increasing, with more Slack members joining and asking questions, which is a good proof point. More than 70 new contributors have been added since sandbox, we have defined many operational SIGs, which are working well, and the community meetups continue to happen, driven by us. But what I primarily wanted to share is that there are two other meetups that were started by community members themselves, so that's a sign of adoption in different geographies as well.

I have one or two slides on the Litmus 2.0 work, but these are the notable features. We did improve our CI and e2e pipelines to deliver patches faster, on a monthly cadence; we never missed a release so far, and we release a patch release almost every month apart from the main release. We did meet our architectural goal, or design goal, of delivering the Litmus Portal. It is a huge step towards multi-tenant chaos and declarative chaos for the cross-cloud ecosystem. Also, we didn't want to stay at the experiment level; we wanted to go to chaos scenarios, so we integrated with Argo Workflows. And there was a lot of feedback along the lines of: you don't get to define what the chaos steady state should be,
I will define it myself. So we introduced probes, with a lot of community work. One of the other main things that we did is observability improvements. A lot of chaos can happen in different forms, but whoever is affected should be able to know what happened and when: was this a problem because chaos was introduced, or was it a natural thing? The context of the chaos introduction should be captured in the monitoring system, so we did define that. And a lot of namespace ownership issues came in, so now Litmus can run within a namespace: if developers are sharing a large cluster and they own only namespaces, they can manage chaos within those.

So this is a kind of quick overview. Litmus SIGs were primarily encouraged by other projects and by the CNCF SIGs themselves. There are a lot of community-call questions like "I want to contribute here; my interest is in observability," so we defined SIGs within Litmus. Right now the documentation SIG is working very well; there are two contributors who are driving the documentation needs of Litmus. Similarly for deployability, two contributors came in and they manage that whole SIG. So that's also working well, and we continue to run SIG-testing, SIG-orchestration, and so on. As you can see, apart from the ChaosNative team, other members are actually chairing these; we want the SIGs to be chaired by someone other than the ChaosNative team so that we get more natural feedback and roadmap reviews.

In terms of the contributing orgs, we took this from DevStats: apart from MayaData, which is ChaosNative now, you can see Intuit, telecom companies, Red Hat, and Microsoft itself; some of these are pretty active in contributing overall, even though we took only the top 18. But since sandbox we've had 70 unique contributors, so that's good news showing that the project is actually being contributed to. This is another stat that we wanted to present: the yellow line is the number of forks; there's a rise in October because of Hacktoberfest, and other than that, GitHub stars are generally linear, we keep getting a few stars here and there. The open PRs and issues are actually going down; that really means we're working hard to close the issues, and there's a lot of work that we're doing to fix and close many of them in Litmus.

These are the notable contributors; given the 15 minutes, I'll just rush through. Primarily Red Hat and Orange are contributing a lot to Litmus probes: the definition of them, the definition of steady state, and how to introduce chaos on cloud platforms from Kubernetes itself. Those are some of the ideas they brought in, and the idea-to-code conversion has happened; it is a feature already now.

These are the integrations. One of the things that we believe will drive the uptake of Litmus is how well it is integrated into various CI/CD tools, because generally, before chaos comes in, the teams are already using such a tool. So we started with Argo Workflows; it's not a CI/CD tool as such, but we wanted to use it to define chaos workflows. Spinnaker, Keptn, GitLab, and GitHub are the four CI/CD tools, and many more are in the pipeline; we're just waiting for some community members, or ourselves, to prioritize them. KubeVirt is another integration, done by Red Hat, where they wanted to introduce chaos for a non-Kubernetes target, KubeVirt VMs, and that went well. We also have started contributing to the CNF test suite. And Okteto is a developer cloud for Kubernetes, so they get a ready environment, and when they merge code they can run chaos with Litmus, so that's a good use case to develop.
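Since probes came up a few times above, here is a minimal sketch of how a steady-state probe is declared on a ChaosEngine. It follows the publicly documented Litmus probe schema as best understood; the application names, namespace, and URL are illustrative, not from the talk:

```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: frontend-chaos
  namespace: team-a              # namespaced mode: chaos stays inside the tenant's namespace
spec:
  appinfo:
    appns: team-a
    applabel: app=frontend
    appkind: deployment
  engineState: active
  chaosServiceAccount: pod-delete-sa
  experiments:
    - name: pod-delete
      spec:
        probe:
          # Steady-state hypothesis: the frontend keeps answering 200 while pods are deleted.
          - name: frontend-availability
            type: httpProbe
            mode: Continuous     # evaluated throughout the chaos run
            httpProbe/inputs:
              url: http://frontend.team-a.svc.cluster.local
              method:
                get:
                  criteria: ==
                  responseCode: "200"
            runProperties:
              probeTimeout: 5
              interval: 2
              retry: 1
```

The same declarative pattern covers the other probe types (cmdProbe, k8sProbe, promProbe) and modes (SOT, EOT, Edge, Continuous, OnChaos), which is what lets each team encode its own steady-state definition.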
On the community events: there were a lot of things that we did to make the project easy for users to get started with. Primarily we did a lot of videos on YouTube, targeting how to make it easy, and also interactive tutorials on Katacoda. We did orchestrate a good chaos panel discussion involving our own project users as well as users from the wider community. And we did have a great event, primarily helped by the Litmus team as volunteers: Chaos Carnival. Apart from the ChaosNative team, six Litmus users came and presented how they're using Litmus in various forms. We did have bootcamps, and we're also gearing up for the Community Bridge programs, GSoC and Google Season of Docs; GSoC is something we give very high importance. Another piece of good news is that cross-project collaboration is happening, primarily with Argo Workflows and Keptn, where we keep looking at how to do chaos with continuous delivery and that sort of thing.

These are some slides I'll quickly run through, from Container Solutions. The video links are given here: network chaos, and how it was orchestrated by Container Solutions using Litmus. And this is how Red Hat is using probes, chaining probes to define steady state and do KubeVirt chaos. This was a talk on the Keptn integration by Jürgen, which was an awesome, well-received one; they've been using Litmus with Keptn. Also, from IIG, Michael came and talked about how they do AWS EKS chaos using Litmus at scale, for their general applications and for app replica scaling, to check whether that will work when some chaos is there.

One of the main things that we added is the GitOps integration into chaos. This was one of the features that we've been trying to work on: how can we scale chaos itself to larger systems? So we have front-end GitOps and back-end GitOps, and we did demonstrate it. It can work with any GitOps tool, like Argo CD or Flux, and also Spinnaker. It is coming out in the 2.0 beta in about a couple of weeks from now. And the Okteto demonstration was good. I'm putting these here as proof of Litmus users coming in and speaking at public events. This is a telco reference where they have publicly demonstrated how they've used Litmus for their telco platform chaos needs.

That's about it for the various community updates and the progress that we made in the project. We are certainly proud of the architecture built up over the last year. Even before we announced Litmus as a project for public consumption back in 2018, we had set certain goals for chaos engineering: we wanted to make it really fit the cloud-native goals. These are some of the goals that we defined. Actually, GitOps was added last year because of the rise of GitOps; there were four principles that I laid out two years ago in a CNCF blog. The principles are basically: chaos engineering should be open source; the experiments should be community-collaborated; for managing the lifecycle of chaos itself, you need to follow the operator pattern, with CRDs and CRs; and for scaling chaos engineering,
you need to work with the GitOps tools out there; and for observing chaos, you need open observability, because you don't want to get locked into a particular observability system. These are the goals we worked towards for Litmus 2.0. The last year has been fantastic, and I would like to state here that we have actually achieved a feature-complete state for all of these. The first three are pretty much in usage; GitOps and open observability we are about to release, and they have been tested in some form. So this year, that's the learning we are taking forward.

Overall, 2.0 really means Litmus has changed from being a tool for a single user to execute chaos experiments into a kind of toolset for teams operating across cloud environments to execute chaos workflows, not just chaos experiments, in highly scalable cloud-native environments. And we say that because we did a lot of work, as I mentioned: workflows, the Chaos Portal, GitOps integration, chaos analytics, observability, steady-state definition through probes, VM chaos, and namespaced chaos. With all this together, Litmus is better suited to larger enterprises, where it is already in use, I would say.

To repeat: we believe experiments will become a commodity; it's more about chaos scenarios. And it moves from per-user to teams, from per-cluster to a multi-tenant, cross-cloud system. As an organization, you need to manage Litmus at the organization level, just like you do GitOps: a single source of truth where you keep all the configuration in one place for all the teams together, and you can manage chaos in the same way. You can manage chaos experiments that way too, of course. Earlier, all the experiments were put in one public hub, but now teams can have their own private hubs, because they develop their own experiments that they want to manage within the organization, and some they can upstream. Litmus works very well in terms of orchestration with a private chaos hub. Earlier it was CLI-only; mainly for observability and ease of use we brought in the portal as well, and GitOps, which inherently helps with scalability and management. And the primary new feature is really the probes; this has gotten a lot of attention from the community. Users are able to declaratively define what they think is the steady state of the system before the introduction of chaos. The hypothesis definition differs for each application, and for each team using the same application, so they have complete flexibility to go and define the chaos steady state.

So, what is ahead for us in Litmus? We are planning to write more documentation, socialize with the GitOps tools, and get more application-specific experiments contributed to the ChaosHub, while we continue to solidify and strengthen the foundation. In the short term, getting 2.0 out, getting it used, and listening to the community is our goal; in the midterm, the next two quarters, we want to add more chaos types, like gRPC and IO chaos, and we also want to introduce a Rust library for the SDK, as there has been some interest from the community. So we'll be working on that. Other than that, we would like to work with as many tools as possible in the CNCF ecosystem, and also with other CNCF projects, depending on the bandwidth. So with that, I would like to take some questions.
And again, thank you for giving us time here to present. Yeah, thanks, that's a great update; I remember the very first presentation you made, about a year ago. So, on 1.0 versus 2.0: 2.0 is already released, and how many of the users would you say are on it? For 2.0, around three months ago there is a master branch that we created; there are multiple large users who are using the portal already, I would say about 5 to 10% of users. We're going to announce the 2.0 beta on March 15, and then a couple of months from then it will go into the stable release. Basically, the entire community is used to a certain way of using Litmus, and we don't want to just change how they use it, but rather do a slow transition and then move them over.

And maybe, especially for your incubation proposal, because we had a similar discussion with the Flux team between Flux v1 and Flux v2, which I think is actually pretty natural as software evolves: just be clear on what the transition looks like, and especially when the TOC wants to talk to users, I think it's also good if you have some that might already be using 2.0. Some of these things are going on in parallel, like the project maturing towards your 2.0 release and the incubation proposal, but from a validation perspective it's then kind of hard, because you talk to the 1.0 users only and you have to see how people are migrating over; just some input for the incubation. There are some users who are on Litmus 2.0 itself; I'm not sure I can state the names here, but I can definitely ask them. A couple of big users are big proponents; they drove a lot of these requirements. So we can definitely get them to speak to you, or whoever is doing the DD. Yeah. And if there are incompatibilities between 1.0 and 2.0, just keep that in mind for the incubation proposal: either there are none, which is great, or, if there are, describe what the transition path for the user looks like. I mean, if everything is totally compatible... Yes. Yeah, then just put that in there; okay, that's not a big deal. Sure.

So, what exactly do you mean by GitOps? I'm curious, and there are also other people here from the GitOps space. What's the GitOps part of the chaos experiments: do you react to changes in the system, or is this just managing the experiments via CRDs and automatically deploying them? Maybe Karthik can add a little bit more technical detail. Karthik? Sure. So the GitOps integration is more about reacting to changes to applications on the cluster. We have an event tracker on the cluster that can basically pick up changes made to an application. The application changes themselves can be performed by any GitOps tooling like Flux or Argo CD, and that can be a cause for a new experiment to be triggered on that application. It's a way of verifying whether a change in the application is actually good: is it resilient for the system? We can verify that with an experiment that we trigger. So that's one part of the GitOps story. The other part is that the chaos experiments themselves, or the workflows that were mentioned, can be stored in Git and synced to the portal. So you always ensure you have a golden copy of your experiments in Git, and whenever you make changes there, that is available for you to consume immediately. So that's what he alluded to when he said front-end and back-end GitOps; I hope that answers the question.
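As a rough illustration of the "reacting to changes" half of this: Litmus 2.x ships an event tracker that watches applications and can launch a chaos workflow when a GitOps tool rolls out a change. A hedged sketch of such a policy follows; the field names are recalled from the Litmus 2.x event tracker and may not match the released schema exactly:

```yaml
apiVersion: eventtracker.litmuschaos.io/v1
kind: EventTrackerPolicy
metadata:
  name: chaos-on-image-change
  namespace: litmus
spec:
  condition_type: and
  conditions:
    # Trigger only when the container image changes; keys and operators are illustrative.
    - key: spec.template.spec.containers[0].image
      operator: Change
```

The deployment to be watched is then annotated (with something like a litmuschaos.io/gitops flag plus a reference to the workflow to run), so that a Flux or Argo CD sync of that deployment becomes the trigger for the resilience check.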
I think it also helps for the release validation things that we do as well. I want to open it up to other people before we jump to the next presentation; I've already asked a lot here. Just a comment: as always, very impressive, both what you have done with Litmus and the way that you have explained where you're at and what you're doing with the incubation proposal, so thank you. Great presentation. Thank you, Cornelia, that means a lot.

Okay, then I would pass over to KubePlus. Thanks. Let me share my screen. Hey everyone, I'm Devdatta Kulkarni and I'll be presenting KubePlus. Let me... okay, there you go. So, I'm a founder of CloudARK, and at CloudARK we have been building KubePlus to solve the problem of how to create multi-tenant application stacks on Kubernetes. At CloudARK we have been working with startups and enterprises who are essentially looking for help to set up their Kubernetes clusters, and by that I mean the platform engineering teams are looking for help to really make sure that their cluster is usable across different workloads. For example, one startup that we are working with wants to host MongoDB as a service on Kubernetes and create MongoDB stacks per tenant. Similarly, there is another startup who is working with Moodle and wants to create Moodle stacks per tenant. Another enterprise is building a browser-as-a-service on top of Kubernetes. Across all these teams, what we have seen is that the main requirement is the same: they have a Helm chart of their application package, and from that Helm chart they now want to create multiple instances per tenant. For example, in this slide what I'm showing is the WordPress Helm chart, and let's say for different tenants I want to create different WordPress stacks. By the way, this is the example I'll be using as a running example throughout the presentation.

When working with these teams, what we have seen is that the challenge platform engineering teams typically face is: how do you really isolate the various resources across different tenants? Meaning, for example, in the case of Moodle-as-a-service, that startup wants to run the Moodle stack for one tenant on one worker node, and for another tenant deploy it segregated on a different worker node. Or, for the team building browser-as-a-service, they want to be able to really track and monitor the CPU, memory, storage, and network consumption of the browser instance for each tenant separately. The predominant way this is typically done today is through some convention, like labels, where the platform team and their consumers agree upon certain labels, and then the ask is that, in these Helm charts or wherever the application is deployed,
they make sure that the right labels are used, or the right labels are defined. But it is not such a straightforward problem to really check whether the Helm charts include the right labels, or even whether you can apply the labels on every resource that gets created, because a Helm chart can include custom resources, and there is no way to know today what all the sub-resources are that get created by the operator managing such a custom resource. So essentially, the problem that platform engineering teams face today comes down to: how do you really define and enforce tenant-level policies, for example deploying separate stacks on separate nodes? How do you track consumption and the metrics for CPU, memory, storage, and network? And how do you visualize the tenant-level resource topologies, which is the graph of all the resources created as part of a particular tenant's stack?

We are addressing this problem through KubePlus, and our basic idea is: let's wrap an API around the Helm chart. This API will basically provide a control point for the platform engineering teams to define and enforce these kinds of policies, and it will also provide them a way to expose only those things that they want to expose to the end users. That also covers exchanging Helm charts: if there is a platform team maintaining a Helm chart, they probably don't want to give it out to the product or user teams to create these stacks; they want to keep it private. That is also possible by defining an API. So that's the crux of KubePlus: you give it a Helm chart as input, define policies and monitoring, KubePlus generates an API for you, and you create instances of that API to actually create the stack for every tenant.

KubePlus is an open-source framework to design multi-tenant platform services declaratively. It consists of two components. One component is what we call the CRD-for-CRDs, which is essentially a top-level CRD called ResourceComposition, using which you can create whatever CRD you need for a particular platform service. For example, in this picture we have a ResourceComposition used to create a WordPress-as-a-service CRD, a MongoDB-as-a-service CRD, and so on. From those, you can then instantiate an instance of that CRD to create an application stack. That is one part of KubePlus; the other part is a set of kubectl plugins which allow you to visualize the runtime graph of all the resources that are created as part of an application stack. The main components of the KubePlus CRD-for-CRDs are, as I said earlier, the ResourceComposition top-level custom resource, along with resource-policy and resource-monitor custom resources. The main work done by KubePlus happens through a mutating webhook and a custom controller, and there is another module which actually does the work of deploying the Helm charts. We depend on Helm 3, so every Helm chart needs to be packaged using Helm 3.

So here's the demo scenario for today: WordPress-as-a-service. The Helm chart that we have built is a simple WordPress pod with a service, ingress, persistent volume, and so on, and for its database needs we are using the Presslabs MySQL operator, which provides a MySQLCluster custom resource: so, a pod and a custom resource.
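Step one, which is walked through next, registers this chart through a ResourceComposition. Here is a minimal sketch of what that object can look like; the group, version, and policy field names are adapted from the public KubePlus examples as best recalled, so treat them as illustrative:

```yaml
apiVersion: workflows.kubeplus/v1alpha1
kind: ResourceComposition
metadata:
  name: wordpress-service-composition
spec:
  # The new API (CRD) that KubePlus registers on behalf of the platform team.
  newResource:
    resource:
      kind: WordpressService
      group: platformapi.kubeplus
      version: v1alpha1
      plural: wordpressservices
    # Helm 3 chart packaging the WordPress pod plus the MySQLCluster custom resource.
    chartURL: https://example.com/charts/wordpress-chart-0.1.0.tgz
    chartName: wordpress-chart
  # Tenant-level policy enforced on every pod the chart (or its operators) creates.
  respolicy:
    policy:
      resources:
        requests: {cpu: 100m, memory: 1Gi}   # arbitrary demo values
        limits:   {cpu: 200m, memory: 2Gi}
      nodeSelector: values.nodeName          # resolved per tenant from the instance spec
  # Track CPU/memory/storage/network for each instance's whole resource graph.
  resmonitor:
    monitor:
      resources: all
```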
The assumption we are going to work with is that the operator is already deployed; that will typically be done by the platform engineering teams, who deploy the operators ahead of time. So we start there, and for the demo we have two worker nodes; it's all on GKE. By the way, I have screenshots just because it's easier, but this entire demo is also available on GitHub, and I have pointers to it towards the end.

The first step is to define the ResourceComposition, which in this case is going to take the URL of the Helm chart as input; this is the Helm chart that packages the WordPress pod and the MySQLCluster. In addition, there is a policy definition, which I've shown on the left side. There are two policies: the first is that, for every pod that gets deployed as part of this Helm chart, we want to define the resource requests and limits; I have picked some arbitrary CPU and memory values just to showcase the demo. The second thing we want to specify is on what node a particular pod needs to be deployed, and if you see here, we have defined the nodeSelector as values.nodeName; essentially that allows us to customize the inputs we receive for every tenant, so different tenants can be deployed on different nodes. With this as input, you give this to KubePlus and define whatever name you want; WordpressService is the name that we are going to use, and KubePlus will register the Helm chart and register this new CRD in your cluster. That is the first step.

Once that is done, the second step is to create instances of this WordpressService: for tenant one you create one instance, for tenant two another instance. This slide is just showing one WordpressService instance, and the spec properties of this instance essentially come from the values.yaml of that Helm chart. Whatever the underlying chart's values.yaml defines will be reflected as properties of this custom resource. That's why we see the namespace and tenant name here. In this case, these three, nodeName, tenantName, and namespace, are inputs that the platform team will have to specify, so for different tenants they can pick different node names. So this is the second step.

Once you create this, what you get is the WordPress stack, which I'm now visualizing using KubePlus's second component, the kubectl connections plugin, to show this entire resource graph. What you see here is the WordpressService instance, wp-service-tenant1, that we created. Behind the scenes, when the instance of WordpressService was created, KubePlus actually created all these resources; for the MySQLCluster, the actual operator got involved and did the right thing. And if you notice, the MySQLCluster for tenant one has so many other resources that it creates. KubePlus is able to discover all of these at runtime by tracking the different relationships that exist between Kubernetes resources: owner references, spec properties, labels, and... I forget the fourth one. But with those properties that we typically have on Kubernetes resources, we are able to track all of this.
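Step two, the per-tenant instance, is then just an object of the newly registered kind, whose spec mirrors the chart's values.yaml. A sketch with hypothetical names:

```yaml
apiVersion: platformapi.kubeplus/v1alpha1
kind: WordpressService
metadata:
  name: wp-service-tenant1
spec:
  # Spec properties mirror the chart's values.yaml.
  namespace: wp-tenant1         # where this tenant's stack lands
  tenantName: tenant1
  nodeName: gke-demo-node-1     # consumed by the nodeSelector policy above
```

Creating a second object with tenantName: tenant2 and a different nodeName yields the second, node-segregated stack.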
What we see here is that the top-level resources for an instance of WordpressService are part of the Helm chart: there is a secret, a service, an ingress, a persistent volume, and the pod; these are part of WordPress, and then the MySQLCluster is the other resource that's part of the Helm chart. And if you look, there are only two pods in this entire graph. We are able to discover these graphs, and once we have the pods through these graphs, we are able to verify policies. This slide is showing that if you look at the pod for MySQL and check its resources, the CPU and memory requests and limits are what we specified as part of the policy input. Similarly, if you look at the node name, these were the node names specified when this particular tenant instance was created. What we do in KubePlus is that the mutating webhook catches all the pods that are getting deployed through that Helm chart, and we modify the spec properties of the pods before they get deployed. That way there is no restart of the pods; before the pods are even deployed, the right policies are embedded into the pod spec.

And because we are able to track the pods, we are able to collect the metrics as well. Here, as the output of another KubePlus kubectl plugin, kubectl metrics, we collect the CPU, memory, storage, network-bytes-received, and network-bytes-transferred metrics for that particular instance. So in this case, for this particular tenant, we are able to collect all these metrics, and they can be exported in Prometheus format as well and viewed in any tool that supports that format.
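The effect of the webhook described above is that the policy lands in the pod spec before scheduling, so no restarts are needed. Conceptually, the chart-created MySQL pod comes out looking something like this fragment (values hypothetical, carried over from the earlier sketches):

```yaml
# Fragment of the MySQL pod after admission; the marked fields were injected.
spec:
  nodeSelector:
    kubernetes.io/hostname: gke-demo-node-1  # from the tenant instance's nodeName
  containers:
    - name: mysql
      resources:                              # from the ResourceComposition policy
        requests: {cpu: 100m, memory: 1Gi}
        limits:   {cpu: 200m, memory: 2Gi}
```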
Now, on designing this approach: one thing that we really focused on was how to create these APIs that wrap the Helm charts, and the approach we chose was to build this top-level, or meta, API, the CRD-for-CRDs, to create these platform services. The reason we went this route is that it makes it easy to provide a declarative way to define these APIs along with the monitoring and policy inputs. The other reason is that having ResourceComposition as a single top-level CRD allows us to have a single operator: the custom controller that KubePlus contains for ResourceComposition is a single custom controller running in the cluster, able to generate and handle any APIs that you create. By contrast, something like the Helm operator would create a new operator for every Helm chart, so just from the point of view of the footprint of the operational control plane, having a single operator is better. Those are the reasons why we went with this approach. And, yeah, I just wanted to come here and present to this community. We are seeking input on additional new features, we have certain things planned as part of our roadmap, but we are also looking for adopters and general input from the community. Thank you. Any questions or thoughts? I'll be happy to answer them.

This kind of reminds me a bit of what the Operator Framework does with their Helm-based operators, to some extent: they can also take a Helm chart, package it, and get a new CRD for the values file. Correct; this is the point that I was referring to where we've seen the Helm operator. The advantage of using a CRD-for-CRDs is that you don't need to create a separate operator for every Helm chart. My understanding of the Helm operator is that you start with the Operator SDK, it generates a new operator for a Helm chart, and then you have to instantiate that operator in your cluster. So, essentially, consider a situation where you want an API for WordPress, an API for MongoDB, and an API for browser-as-a-service as part of the same cluster: you would end up actually instantiating three different operators, each with their own control planes. Our approach is different: we allow you to declaratively create these APIs, the input you provide is a link to the Helm chart, and that gets registered as part of creating the new CRD. The consumers then create instances of that new API.

So it actually does create a CRD for each Helm chart; it's just that the user isn't responsible for creating it? Is that right? I'm not quite sure on that. No, you are correct. The platform engineering team declaratively defines a new CRD, say, using the ResourceComposition to define WordPress as the custom API, and as part of that it specifies the Helm chart. So the platform engineering team is only responsible for managing the Helm charts, and they can use charts from the community, it doesn't matter; KubePlus will actually install the new CRD, and it is also able to react to the events for those new API types. For example, WordpressService is a new kind that gets registered, or MongoDBService would be a new kind that gets registered. KubePlus has the machinery to react to these new kinds, or new types, that get added at runtime by platform engineering teams. Okay, does that answer the question? I think so. The way that I understand it now is that the CRDs are in fact created; it's that you're shifting the responsibility away from the platform team, which would otherwise have to manage a host of CRDs using something like the Operator Framework or whatever way they decide to create the CRDs, and the system, KubePlus, is responsible for that management.
Correct, KubePlus takes over that part of actually instantiating the CRD, and then provides hooks to define any policies that you want to apply to that entire Helm chart. Right now we are focusing on policies that are mutation-based, because a lot of these things need to happen at the pod level. This entire approach stemmed from the observation that operators and custom resources create a lot of resources behind the scenes: a MySQLCluster will end up creating two or three pods, and those pods are not visible outside, in the sense that the only abstraction available outside is the MySQLCluster instance. So how do you really control what gets specified in a pod? It's possible that the operator developer did not expose all of the pod spec at the custom-resource level. From there, this entire approach stemmed: we want to provide platform engineering teams a way to define mutations at the pod level. And because an application is packaged as a Helm chart, Helm being the predominant packaging format that we have seen, we focused on taking that as the input, and it then made sense to wrap an API around it. But then, how do you create that API? As a platform engineering team you might be working with many different Helm charts, and there's no way you can create a separate operator for each of them. That's why this uber-CRD entered our thinking: let's have ResourceComposition as the top-level CRD, which allows the platform engineering team to define new CRDs, and take away the responsibility of having to instantiate and create new CRDs themselves.

Can you talk a little bit about the boundary of resource management with your mutating webhook: how much of it is in the chart, and how much of it do you expect to manage, things like replica sets, horizontal or vertical pod autoscalers, and all the policies associated with those? Yeah, absolutely, that's a great question. To be honest, we are looking for input, and right now the policies that we have defined stem from the work that we have done with our early adopters. The things we've seen people ask for are the ability to control CPU and memory, and the collocation of pods onto certain nodes. But you are right; the thing that is coming up next is how to attribute network traffic to only the tenant-specific network traffic, and more specifically traffic that is outward-facing. By outward-facing, what I mean is: in this particular application, the WordPress pod is the one which is user-facing; internally it uses the MySQL pod, but any network traffic, if you want to count it, has to be counted against the WordPress pod to be accurate. With our connections plugin we will be able to track which pod is the outward-facing one. And that also applies to the policies, like the HPA-related things.
Ideally, we would like to provide controls for everything that can be mutated at the pod level; the pod specification allows one to define a lot of things. It's possible that certain things are already defined in the Helm chart, so we do support overrides: as a platform engineer, you can say that you want to override what's already in the Helm chart, and if you don't want to override, the flag will tell you. But the policies we support today are resource limits and node selectors, and the next ones coming up are affinity and anti-affinity and pod disruption budgets. So there is a set of things that we are planning to do, and we would love additional input on what new things should be supported.

What I also can recommend: we have this little demo project available as an example, so people can simply try it out and play around with it. Yeah, I will do that. Okay, so we'll share these links on the Slack channel; if folks can try it out and provide input, that would be really great. And thanks for the opportunity to come here and present. If any questions come up, we are also available on Slack; it's all open source, so feel free to try it out.

Okay, that was a pretty packed meeting today, so we're almost at the top of the hour. Anything else? Those were two good new project presentations, so I see that we're getting more momentum here. Thanks to both presenters today; with the Litmus folks we will obviously follow up on the next steps here, and also thanks for the KubePlus presentation. It's always great to see new projects that you haven't seen before, and obviously multi-tenant deployment, more declarative layers on top of Kubernetes, and moving further into this platform idea are always great things to look at. All right, then I would call it a meeting for today. Thank you everyone for participating, and see you again in two weeks. Thanks. Thanks everyone. Thank you.