Well, welcome everybody. A slight delay in the soundcheck, but welcome to this OpenShift Commons briefing. We're gonna get started pretty quick here. The other day Siamak, a product manager who I've been working with for a couple of years now, did a two-minute talk on Helm 3 in OpenShift and I decided that I needed more. So he's going to give us a deep dive on what's next with Helm 3 on OpenShift, and I'm gonna let him introduce himself. There'll be live Q&A. You can either type into the Twitch channel or in BlueJeans if that's where you're listening from, and we will relay the questions, or we'll open up the mic and let you ask the questions yourself. So Siamak, take it away and let's hear more about Helm. Thanks Sian. I'll introduce myself briefly: I'm Siamak Sadeghianfar, product manager at Red Hat, part of the OpenShift team. My focus is the developer-related tools and flows, Helm being one of them, along with CI/CD and other areas related to that, and we have started supporting Helm 3 on OpenShift. So I'm gonna talk to you about what Helm 3 brings to the table and what we're doing about it on OpenShift, hoping to show you a little bit of that in action toward the end of the session, on an OpenShift cluster that I have accessible here for the session. So, very briefly, let me talk about what Helm is. I think most people within the Kubernetes sphere know about Helm, but to level-set the knowledge: Helm is a package manager for Kubernetes apps. You can describe your application, the Kubernetes manifests that your application requires, as a Helm package, then parameterize them and install and update these applications on any Kubernetes cluster. It's a very popular technology. It has been around for quite a while, and it has become the way to install apps on Kubernetes.
There have been multiple attempts at how to define applications and how to parameterize these manifests, and Helm is really the most popular one that has emerged among these solutions. Even though other ones, like Jsonnet and other tools around, are also still popular, Helm has emerged as the most widely used among these ways of defining applications. There are a couple of concepts in Helm. Chart refers to the package I was mentioning: when you describe your application, you put those manifests in a package that is called a chart, and you generally parameterize these YAML files so that you can configure your application differently in the different environments where you deploy. Usually within your development environment you would use a particular instance of a database, and in staging and production it would be a different one, so the URL for that database could be something that you parameterize as an environment variable in the deployment object, and that deployment object is one of those YAML manifests included in the chart. You have to put these charts somewhere as well, for when you want to distribute them: if my application is built as a chart and I want to give it to IT ops to deploy in the production or staging environment, I have to somehow share it with them. A repository is a place where these charts are stored so they can be shared and distributed. And release, the last concept we're going to talk about: when you deploy a chart, install the chart into a cluster, you will have a release of that chart. If you modify that deployment and do an upgrade of your application, for example, you have another release; so every deployment of a chart into a cluster becomes a release, which we will look at a little bit later as well. But how does Helm work? This is where the main difference between Helm 2 and 3 really comes from. A little bit of history here: Helm 2 was not supported in OpenShift.
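The chart concept above is easy to see on disk. Here is a minimal sketch of a chart's layout, built by hand for illustration (the chart name, values, and the nginx image are all made up; `helm create` would scaffold a richer version):

```shell
# A Helm chart is just a directory: metadata, default values, and templates.
mkdir -p mychart/templates

cat > mychart/Chart.yaml <<'EOF'
apiVersion: v2
name: mychart
description: A minimal example chart
version: 0.1.0
EOF

# Default values that templates reference and that users can override.
cat > mychart/values.yaml <<'EOF'
image:
  repository: nginx
  tag: "1.19"
replicaCount: 1
EOF

# A parameterized manifest; {{ .Values.* }} placeholders get filled at install time.
cat > mychart/templates/deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: web
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
EOF

ls mychart mychart/templates
```

Packaging this directory with `helm package mychart` produces the `.tgz` archive that a chart repository would serve.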
We generally had a non-production view; we did not recommend customers to use Helm 2 in production environments. And the reason for that is that Helm 2 relied on a component called Tiller that runs on the cluster as cluster admin. So as a user with limited access to the cluster (I'm a developer, I only have access to the namespaces where my application is deployed), I could ask Tiller to deploy a chart into the cluster. I as the developer have limited access, but Tiller is cluster admin, so I could have used the access that Tiller has and asked it to install things that I personally don't have access to install. So this became a problem for production environments, right? Because you want to lock down the production environment and control what gets installed on it, but using Tiller, using Helm 2, would have opened the door to anyone installing things that they are not supposed to install on the cluster. I'm simplifying the security issue with Tiller, but that more or less captures the problem that existed. So we generally did not recommend customers to use Helm 2 in production clusters, and a lot of users of Helm 2 still used Helm as a template engine or within non-production environments. That has been really the main reason we did not bring Helm 2 as a supported component into OpenShift, even though customers could still use Helm 2 on OpenShift; we were not endorsing it. When it comes to Helm 3, it has a different mechanism for deploying charts, and the Tiller component is gone. What really happens, as you can see on this slide, is that you have the Helm chart, which is those manifests that I described, the manifests for the application: this is the deployment YAML, this is your service YAML, the ingress object, the config map. And you have certain values, which are the actual values that you want to substitute into these parameterized manifests, right?
You want to override the database URL, or you want to replace the image tag that needs to be deployed. So these two get combined, and the result is a set of Kubernetes manifests that can get deployed into the Kubernetes or OpenShift cluster, and a release object is created. In Helm 3, the only component required to do this for you is the Helm CLI. All of this is happening on the client side; nothing is running on the cluster to do this for you. In Helm 2, there was Tiller running on the cluster, so on OpenShift there would have been a Tiller component that runs as cluster admin. You pass the chart to it, you pass your values to it, Tiller combines these two, generates the manifests, and deploys them into the namespace; hence the security issue. In Helm 3, you don't have that anymore: you're using the Helm CLI, and as a developer, when I use the Helm CLI, my security context is used. I can only deploy components that I have access to. If the chart includes a resource that I'm not allowed to create, then when the resulting manifests are applied to Kubernetes or OpenShift, I would immediately get an error, right? Because the cluster doesn't really care that these manifests were produced by Helm; it sees them as any other Kubernetes manifests, and the RBAC model, the security model of Kubernetes, would immediately complain about it. So that's the huge difference between Helm 2 and Helm 3, and it has been a very positive change. Many, many users in the Helm community had asked for this removal of Tiller, and it happened in version 3. It addresses the issue that we had for the longest time, and it enabled us to bring Helm 3 to our customers on OpenShift as well. On the slide, you also see an image registry, because all Helm does is convert those templates and values, the configs, into a set of Kubernetes manifests.
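What "everything happens on the client side" looks like in practice can be sketched with the Helm 3 CLI. This is a command sketch, not runnable without a cluster; the chart directory `./mychart`, the release name, and the values file `prod-values.yaml` are all illustrative:

```shell
# Render templates + values locally; nothing touches the cluster yet.
helm template my-release ./mychart --set image.tag=2.0

# Install: renders the same manifests, applies them to the cluster with
# YOUR credentials (no Tiller involved), and records a release secret
# in the target namespace.
helm install my-release ./mychart --values prod-values.yaml --namespace demo
```

If the chart contains an object your account isn't allowed to create, the install fails with an ordinary Kubernetes RBAC "forbidden" error, exactly as described above.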
But the actual application is the images that you need, and they still have to get pulled from an image registry like quay.io or Artifactory or Docker Hub or somewhere else. So they get pulled by those manifests like any other Kubernetes manifests, right? There's no difference really. The release objects: why do we have those in Helm? They contain the metadata about what happened, right? So in the release, you would know that it was this particular version of this chart that was deployed at this time, and these are all the configurations that were applied at the time. Because after you deploy a chart, I can still manually use kubectl or oc to modify these objects, edit the YAMLs, and reapply them outside of the Helm context, right? So a release object in Helm contains what those manifests looked like at the point they were deployed, so that this information can be used during an upgrade, for example, to know that the live object has changed from what was included in the release. To summarize, a release contains all the metadata about the point in time that a chart was installed into a namespace on a cluster. So what else has changed between Helm 2 and Helm 3? We talked about Tiller; that's still the largest piece of it, and a very positive change. Everything in Helm 3 is on the client side: all you need is the Helm CLI and you're good to go. The second change, which comes together with the removal of Tiller, is that release metadata I mentioned. It is managed as secrets within the namespace in the cluster. Before, in Helm 2, Tiller was the central component that was aware of all the releases, so Tiller held that information about what releases have happened on this cluster. And since this was a central component, you had to have unique release names across the entire cluster, which was a little bit difficult. In Helm 3, releases are just a secret.
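The double encoding mentioned below ("decode it in base64 twice") can be simulated locally. This sketch mimics Helm 3's storage scheme, in which the release JSON is gzipped and base64-encoded by Helm, and the Kubernetes secret's data field adds one more base64 layer; the JSON payload here is made up:

```shell
# A made-up release payload standing in for Helm's real release JSON.
release_json='{"name":"demo","version":1,"info":{"status":"deployed"}}'

# Helm gzips and base64-encodes the release; Kubernetes secret storage
# base64-encodes the data field once more.
encoded=$(printf '%s' "$release_json" | gzip -c | base64 | base64)

# Reading it back: base64-decode twice, then gunzip.
decoded=$(printf '%s' "$encoded" | base64 -d | base64 -d | gzip -dc)
echo "$decoded"
```

Against a real cluster, the equivalent read is roughly `kubectl get secret <release-secret> -o jsonpath='{.data.release}' | base64 -d | base64 -d | gzip -dc`, where the secret name varies per release and revision.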
So all that metadata is encoded as base64 and put into a secret in the namespace where you deploy the chart. Again, you don't rely on any central component; everything you do is on the client side, and Helm 3 uses standard Kubernetes objects for storing the metadata about the releases as well. So if you're curious, you can just use kubectl to look at the secrets within your namespace after you deploy a chart, and decode one to see all the information related to the actual release. It's a very long JSON file; I think during the demo I can show you that as well. The third point, what else is new in Helm 3, is the introduction of library charts. In Helm 2 there was a need to share certain manifests or logic between multiple Helm charts. This was done in Helm 2 as well, but in Helm 3 it is recognized as a library chart, so it's a first-class citizen. A library chart is a type of chart that does not deploy anything; its only purpose is for other charts to include it as a dependency, for sharing common functionality between multiple charts that are related to each other. Three-way strategic merge is another thing introduced in Helm 3. It's very helpful, especially with the appearance of projects like Istio, or if you're using Vault for injecting secrets into the pods in your namespace. In Helm 2: so a release, like we talked about, captures the metadata when the chart was deployed into the cluster. Let's say a month later, a couple of weeks later, you have a new version of your application, a new image for your application, and the configuration has also changed slightly. So you get a new version of your chart and you want to upgrade your application through Helm to the new version of the chart. In Helm 2, the way the upgrade happened is that it looked at the release metadata to see what kind of manifests were generated for the application.
And it also looked at the new version of the chart, compared these two, and adjusted your live application to the new manifests included in your chart. This all looks good, except that when using something like Istio, the manifests that are live on the cluster slightly differ from what is included in the release. So imagine that you have set auto-injection on your namespace and you're using Istio. For every deployment that you have in the namespace, or for that particular application, there is a sidecar that is injected into the pod by Istio. The chart obviously does not know about it, so there are no traces, no information about the sidecar, the Istio proxies, in the chart manifests or in the release. So you have a release that doesn't know anything about the sidecar, and you have a new version of your chart that you want to upgrade to. The new version also doesn't know anything about the sidecar. So what usually happens during this upgrade is that those changes that Istio had made on your pod all get overridden and removed when you do the upgrade. So it creates a bit of a clash there. And there are other projects that have the same pattern: the Vault injection I mentioned does the exact same thing. You're injecting secrets, so the Vault controller would also inject a sidecar into each pod, and a Helm upgrade would override it; it would basically remove the sidecar from your application and create issues. In Helm 3, this is addressed by the three-way strategic merge. So what does three-way mean? Every time you want to upgrade a release, Helm looks at the release metadata that you have: what manifests were generated for the application, what you expect to be on the cluster.
It also looks at the new version of the chart, what manifests you want to go to, but it also has a third pillar, which is why it's called three-way: it looks at the live state of the application, the live state of the objects in the cluster, and merges all of this together, instead of ignoring what's live on the cluster and only looking at the release and the new chart for creating the new manifests. So the three-way strategic merge makes sure that if there are live changes on the cluster, if Istio has injected a sidecar or something else has made changes to those objects, they get preserved when you run an upgrade; it doesn't overwrite them. The next change, which we will look at in a little more detail as well, is the Kubernetes security model. In Helm 3 everything is on the client side; you are just generating a set of manifests that get applied to the cluster, so every object that you create follows the Kubernetes security model. If you are creating an object that your user account is not allowed to, you would get an issue there, because Kubernetes would just not allow creation of that object. Let's say you are not a cluster admin but you are creating a CRD or a cluster role: Kubernetes will return an error that you don't have access to create that, even though it is included in the chart. That was possible before with Tiller, because Tiller had access to everything; in Helm 3, Helm has access to whatever you have as a user, and you can't go beyond your security context. CRD installations are also simplified in Helm 3: they are better recognized, the ordering is simpler to control, and the CRDs get installed before the rest of the manifests that you have.
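The three-way idea above can be illustrated with plain `diff3`. This is a stand-in for illustration only (Helm actually merges the JSON manifests with a strategic merge, not a textual diff), and the three inputs mirror the story: the old release's manifest, the live object with an injected sidecar, and the new chart's manifest:

```shell
# What the release recorded at install time.
cat > release.yaml <<'EOF'
image: myapp:1.0
replicas: 2
EOF

# The live object: Istio has injected a sidecar since the install.
cat > live.yaml <<'EOF'
image: myapp:1.0
replicas: 2
sidecar: istio-proxy
EOF

# The new chart bumps the image.
cat > newchart.yaml <<'EOF'
image: myapp:2.0
replicas: 2
EOF

# Three-way merge with release.yaml as the common ancestor: take the new
# chart, but carry over the live change. The sidecar survives the upgrade.
diff3 -m newchart.yaml release.yaml live.yaml > merged.yaml
cat merged.yaml
```

A two-way approach, release versus new chart, would have dropped the sidecar line, which is exactly the Helm 2 failure mode described above.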
Helm Hub is a new piece; it has actually been around for a while now, but Helm Hub was launched as a central catalog for Helm charts. There used to be, and still is, a central repo for Helm charts, with categorization of what is stable, what is in the incubator, and so on, but more and more Helm repos have appeared. There is the Bitnami Helm chart repository and a lot of other ones; every application possibly has its own chart in a separate repo. So Helm Hub was launched as a central catalog that indexes charts across all those Helm repos that exist and makes it really easy to find charts. You don't have to go check these repositories one by one or expect everything to exist in the central Helm chart repo that has been used for a long time. You would go to Helm Hub, where it's easiest to search, and it will send you to the other repos. Helm Hub does not host the actual charts; it just indexes them from the other existing chart repositories out there. And even in the Helm CLI, the default central repo for charts is removed. There is no preference there; you have to manually add the chart repositories that you want to use, and install and pull charts from those repos. There is also an integration with Helm Hub, so through the CLI you can search Helm Hub: if you're looking for WordPress, it would tell you which repos contain a WordPress chart, for example. And the last bit, the piece that we are really excited about at Red Hat, is OCI registries as Helm chart repositories.
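The repository workflow just described can be sketched with the Helm 3 CLI. This is a command sketch assuming Helm 3 is installed and you have network access; the Bitnami repo URL is a real public example:

```shell
# Helm 3 ships with no default chart repository; add the ones you need.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Search only the repos you've added locally...
helm search repo wordpress

# ...or search the Helm Hub index across many public repositories.
helm search hub wordpress
```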
This is an experimental piece. The idea is to use OCI artifacts for storing charts as well, because chart repos right now are basically a plain index YAML that lists all the charts available in the repo and all the metadata around them. But there is no security model, and there is no clear way to add charts to it; you basically have to modify that YAML to add the chart, and it's not easy for a developer to push charts to it or interrogate the repository. Using an OCI registry, you would piggyback on the OCI registry's capabilities around the security model and the push-and-pull model, and you would work with a Helm chart the same way you would with any other OCI image. That makes it really simple to have a security model around pushing charts to the repo or pulling them, and around categorizing them, versioning, and so on. Support for Helm charts as an artifact has also just recently been added in Quay. This is an area that is still emerging; it's not final, because the spec is not final yet, but we're working on it, and we're looking forward to using OCI registries instead of the existing model for chart repositories. So what does this mean for people, the large group of users out there that have existing charts for Helm 2: how do you go to Helm 3?
The statement there is that Helm 2 charts mostly work with Helm 3; the chart format is compatible, so there is no reason they wouldn't work. The only thing you have to be careful about is the security model. In Helm 2, everything got installed because Tiller had access to the cluster. In Helm 3, a lot of those objects might not be able to get created, especially on OpenShift, which by default ships as a very secure platform where a normal user would not have access to make cluster-level changes, would not be cluster admin. So you might run into some of those issues, and you have to review your charts to make sure a chart is not doing more than what's expected for the person who is supposed to install it. If the chart is created for a developer to install, then you have to make sure it's not creating objects that a developer doesn't have access to; that's the only point. There are a couple of ways the migration can happen. You can run Helm 2 and 3 side by side, because Helm 2 is still using Tiller, and every new chart that you deploy can use Helm 3 and use secrets for the releases within the namespace; so that would work. There is also a plugin that migrates: if there's a window during which you don't make any changes to the Helm releases you have on the cluster, the plugin extracts the metadata that exists in Helm 2 from Tiller and recreates it as Helm 3 releases within your namespaces. So that's also available for you. What we are seeing as a very common pattern is to have that coexistence and gradually move deployments to Helm 3, removing the Helm 2 releases to the point that there are no releases based on Helm 2 anymore.
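The migration plugin referred to above is the `helm-2to3` plugin. A hedged sketch of its flow follows; the release name is illustrative, it requires a cluster with existing Helm 2 releases, and exact flags depend on the plugin version:

```shell
# Install the official 2to3 plugin into the Helm 3 CLI.
helm plugin install https://github.com/helm/helm-2to3

# Move Helm 2 configuration (repos, plugins) over to Helm 3.
helm 2to3 move config

# Convert one Helm 2 release (Tiller-held metadata) into Helm 3
# release secrets in its namespace.
helm 2to3 convert my-release

# Once nothing depends on Helm 2 anymore, remove its data and Tiller.
helm 2to3 cleanup
```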
So I mentioned that the security model is different in Helm 3, and we have to review charts if we're bringing them from Helm 2 to Helm 3. This is a brief comparison of how the security model differs. In Helm 2 you have chart signing and provenance, obviously, to make sure the chart is coming from the author it claims to come from. Then you have certificate management around Tiller to make sure Tiller is not compromised, and within Tiller you had access management, so you could control who has access to what. And that was it: Tiller itself had cluster admin on the cluster, so once you got past Tiller, everything was really allowed on the cluster. Everything you asked it, it could do, if Tiller was compromised. Moving to Helm 3, there is no Tiller, obviously. Chart signing is still in place, so you have that provenance control, and the rest of it is all the Kubernetes security model: you have Kubernetes RBAC if that's enabled, pod security policies, network policies, certificate management, user management, service accounts, and so on. Everything you do regarding your application manifests on Kubernetes, your user access, now applies to Helm 3 as well. And when we talk about existing Helm 2 charts, that's really the area that requires more reviewing; otherwise the manifests are all fine. A lot of charts run into access problems when you install them through Helm 3, because suddenly you are limited to what your user has access to; you're not cluster admin anymore on the cluster. If you are a cluster admin, you wouldn't see any difference really; any chart that you deploy with Helm 2 you can deploy with Helm 3 as well. All right, so let me show you a little bit of what we are doing. I had some screenshots here to show Helm in the OpenShift console, but I think it's much nicer to see it live. As part of the support of Helm 3 on OpenShift, we are also surfacing Helm more and
more within the OpenShift console, especially within the developer flow. So in OpenShift 4, as you can see on my screen, there are two perspectives, Administrator and Developer, and the Developer perspective focuses on, obviously, developer workflows. What we have started with is to add Helm charts within the developer catalog in OpenShift. So I go to the Add flows and choose Helm chart within the developer catalog, which is the self-service place for developers: as a developer, if I look for a piece of software, I would always come to the developer catalog to find and deploy it. If I narrow the type to Helm charts, you can see everything. Maybe I'm looking for a .NET application, or I want to deploy a Java application and I can see different types of Java runtimes, OpenJDK and Tomcat and so on, to find here. So this is self-service: any developer who wants to deploy content on OpenShift will come to the developer catalog, and Helm chart is recognized there as a new type of content. Right now you can see a few charts that are available, and you can add more charts; this is backed by a Helm repo, so any chart in the repo will get pulled and displayed here, and we are working toward repository management for the developer catalog as well. So you see the charts here. I click on a chart and get a little metadata (this is an example chart, so it's maybe a little empty) and a link to the home page. If I click install: so we talked about the chart having all the manifests, and we can configure it. This is really the values YAML, so I can add information and modify the behavior of this chart for deploying within my namespace. I click on install, and that would take that chart, download it from the repo, and install it within the demo namespace on my cluster. Within topology we can see this deploying right now, the manifests, and it's going to bring up the pod. Helm charts are recognized within topology as well; let me zoom in a little bit. The nice thing about it is
that when I click on it, it gives me a little metadata, but it also shows me the list of resources associated with this particular Helm release. So it's a Helm release, and it tells me that this Helm release that was created included a deployment config manifest, a build config, a service, an image stream, routes, and so on, plus release notes, if a particular chart has release notes; this chart doesn't have any. Release notes usually contain things like, if you're provisioning a database, the username or password or credentials, or any other config that you want to communicate to the developer who deployed the chart. And if I go to the Helm section of the navigation, I can see the releases listed there. So there is one release called nodejs-ex-k. I didn't rename it; it was the default name generated for me, and this is at our first revision of the chart and application. These are the releases: this information used to be within Tiller, and now it's managed as secrets within the namespace. So if I look at the secrets within this namespace, you can see that there is a type of secret called Helm release. If we look at the YAML, there is a long piece of encoded data here, a large piece of information encoded in base64, and that's the release. If you decode this from base64 twice, because this is a secret, you would see a large JSON that includes all the manifests and so on. You normally don't do this; I'm just showing you how this is different from Helm 2, but that's really how releases are managed, and these are regular Kubernetes objects, right? There is no central piece managing this, and if I use the Helm CLI, I would get the same information about the release. Let's take a look and see if the chart is deployed now. There we go: it had a build; this particular chart had a build config to build the image from source. Let me show the logs of the build. Let's go back
to the topology view. It's zoomed out; there we go, and it is now deployed, a pod is running. So let's do something else: let's go to the release, and I want to upgrade this release to a new version of the chart. I actually don't have a new version of the chart right now, only the version that we had, but what we can do is modify some information about this chart and redeploy it. So an upgrade is not only upgrading to a new version of the chart; you might just upgrade the configuration values of that particular release. I'm going to upgrade this so the pull policy is Always, to always download the nginx image, and upgrade that. So when I do the upgrade, it creates a new revision of that release, and it redeploys the images I had with the new pull policy. If I click on the release that I had, now within the revision history I can see that less than a minute ago I upgraded this particular release with modified values, and in the resources tab I can see all the objects that have been generated by this release. This is very handy if you want to see everything related to the application, to this release: when I deploy a Node.js application, I see everything that was generated, and from there I can navigate to those particular objects of the release. Go back to the revision history: after I made a change, I changed my mind, and I don't want the image to be pulled every time. We can, right from here, obviously roll back to a particular revision that was deployed before. So I go back to the first revision of this release; it asks for confirmation, and I rolled it back. So this is nothing new if you are familiar with Helm; this is the release upgrade, rollback, and uninstall capability. I'm just trying to show that we are surfacing more of these capabilities in the console. What we are planning to do next is to focus a little more on repository management. So right now when you get
OpenShift, there is a set of curated charts, like these IBM products, in the developer console, and there is a way to modify this. We want to enable admins to add multiple chart repositories, and OCI-based chart repositories, into the OpenShift console as the backing repos, so they automatically get pulled and displayed here: give admins a way to curate the Helm charts they want to add to the developer catalog and make them available to the application teams that want to deploy them. All right, let me go back and talk a little bit about Helm and Operators. This is a question I get a lot: all right, you have been doing Operators for a long time and there are a lot of talks about Operators, so how does this relate to Helm, and what does it mean for OpenShift to support Helm? Helm and Operators (and I put it as Helm and Operators; a lot of people ask about Helm versus Operators) are really two different technologies that solve two different problems, but there is a certain level of overlap. In the Operator Framework we have five levels of maturity for Operators, and they map to the different levels of day-2 operations that most organizations have to do. The first phase is when you install the application; the second one is upgrading it; the third one is when you manage its life cycle around storage: maybe you have to back up the data, put it in storage, and be able to restore that data. Imagine it's a database. Phase four is when you are collecting metrics about the application that is deployed, and based on those metrics and alerts you make decisions: maybe you have to shard the data, or reschedule the way your pods are scheduled. You're taking action based on the metrics you're getting out of the application. And the last, the highest level of maturity in Operators, is when you have full control: you're 100% managing that piece of software automatically through an
automated piece of software. So you're tuning your database based on the usage pattern you see in the application, you're adding more resources to the database, you're proactively finding issues and trying to fix them within the database, within that application. These are all the types of activities that IT ops is doing today around your applications, right? These are nothing new; most IT ops are focused on exactly these activities because they have to run critical software in production. What Operators do is give a path to teams that produce software to automate all the way to that highest level of maturity, to autopilot, by encoding that knowledge into software that does it. So you deploy your application as an Operator, and that Operator would install it and also upgrade it. Let's continue on the database example: it would back up the data of the database into storage; if an issue happens, it would restore it; it would look at the metrics and give you guidelines on how to tune it, sometimes even applying the tuning itself; it would watch the workload, maybe add more memory and CPU, and figure out how to shard the data automatically so that your application performs best. So the entire focus of an Operator is on what happens after I have deployed the application. Helm, on the other hand, is a really nice way to package an application and install it; the entire focus is how do I go from these images that I have to describing the application completely and deploying it into the cluster. You can obviously upgrade it too, but the upgrade is mostly focused on Kubernetes manifests, and you all know that upgrade is very different from update. If you have a database deployed on Kubernetes, just updating the deployment to refer to a new image of that database doesn't really do anything for you: you have to export the data, reshuffle it, normalize it, import it
back. There are a lot of database processes like that around it, which the Operator handles. So that's really the main difference: Helm focuses on the update, but an Operator can do the upgrade and all that automation. They are two different technologies, but when it comes to the install, obviously both Operators and Helm can install an application. That's just something Operators have to do because they want to get to all of the day-2 automation that the application requires, right? Their purpose is not to install software; their purpose is to manage software. Helm is focused on installing software, and you see some Operators out there that actually use Helm to install the application and then use other mechanisms for the other levels of maturity that are required: they keep monitoring the deployed application, the deployed chart, they get metrics out of it, they modify the tuning, and so on. So they are complementary technologies that target different problems. And here is a different view of showing those differences as well: Helm focuses on packaging applications, installation, and simpler updates (obviously you can update your Kubernetes manifests), and the Operator's entire focus is on capabilities that look like managed services. You want to run your software as if it were a cloud service, a managed service, so you want it to automatically upgrade the data and normalize it and adapt it, do backup and recovery, do reshuffling of the pods, influence the scheduling of the pods based on what's happening in your application, and do auto-tuning. So they work together really well for the purposes they were created for. Most of the software that you buy or reuse as services that your application consumes, like the database, the message broker, the cache, requires capabilities that Operators can offer, while a lot of the applications that you build yourself, custom apps that you
develop on your laptop and deploy into dev and staging environments, those are usually cases where a Helm chart might be a very good point to start from. And actually, if you have a Helm chart, the Operator SDK offers a path where you can take that Helm chart and create an operator based on it as well.

With that, I'm going to finish and leave some time for Q&A, ending with the roadmap that we have. OpenShift 4.4 was just released; with it we announced GA support for Helm 3 and all the capabilities around the console. As we go toward 4.5 and 4.6, our focus is on the repo management that I mentioned, and also on starting to explore multi-cluster workflows around Helm charts: when you have a chart that you want to push to multiple clusters at the same time and make sure they stay in sync across all those clusters, based on a configuration you have defined. And with that, I'm going to pause here and see if there are any questions so far.

Well, I think you did a great job, and it was much better than the two-minute version we got a week and a half ago internally, so thank you. And with that last slide I think you also answered the first question somebody asked, which is whether this is available in 4.3. He was asking because he's running 4.3 on IBM Cloud right now and wanted to make sure he could do what you're talking about, or at least part of it, in 4.3; I think you teased out nicely the distinction between what's available around Helm in 4.3 and in 4.4. And that person, let me just see, I'm going to unmute you, Carlos Santana. I love that name. If you'd like to follow up, you had a couple of other questions, Carlos. See if you can... there you go.

Can you hear me now?

Yep, absolutely.

Hi, thank you for the presentation. I wanted to see what's going on with Helm 3 and OpenShift. So Samiq, what do you recommend in terms of GitOps? Like, I saw you editing,
as a sneaky SRE, the YAML directly on the OpenShift cluster; hopefully that's not a production cluster that you're editing by hand. But that section of YAML you were showing, is that the values YAML that someone would put in a Git repo and then have something like Argo apply? And I was also interested in what you mentioned about Helm 3's three-way merge, taking what is installed and comparing it with the new version as the upgrade mechanism, because we're currently using a GitOps controller, in this case Argo. Is Argo leveraging that Helm 3 logic to do the upgrade, or is it just a simple values.yaml? Do you have any thoughts about what to put in the Git repo?

Sure. On your first question: there are two different phases when people deal with Helm charts. There is a development phase, where maybe you already have a database in your namespace and a chart coming from somewhere. That's the view I showed, where you deal with the values YAML, the one in that YAML editor you looked at. You want to modify it and keep iterating on it; the OpenShift cluster might not be local, but you're developing code locally and you need to iterate on the deployment of the chart, and maybe you're creating the chart yourself. That's really that use case. For production, you're absolutely right that both the chart itself and the values YAML have to come from a place that is version controlled. It's very common to keep that in Git, with different values YAMLs for different types of environments, and you might apply them through your CI engine, Tekton or Jenkins or something else, or, like you mentioned, through more of a GitOps engine that syncs your repo. And I know that Argo CD has, I forget the terminology Argo CD uses, a Helm applier that recognizes the Helm chart and takes the values
YAML from your repo and deploys it. As far as I know, because Helm 3, I don't think I mentioned this, in addition to the CLI also comes with a Go client, to make it really easy for other tooling to provide what the Helm CLI does through their own interfaces, I'm guessing that's what Argo CD does: it uses the Go client, or possibly the Helm CLI. So I wouldn't expect any differences in the way the merge happens when you're deploying a Helm chart through Argo CD versus through the Helm CLI. I'm fairly sure they use one of those paths into Helm to convert the chart and values YAML into the actual manifests; they're not doing that themselves, they're relying on the Helm CLI or the Helm Go client. So you should get identical results if you were to use the Helm CLI and do the upgrade, and the merge, yourself.

Thank you. And on the specifics: when you did some edits and showed that YAML, because I didn't see a `kind` on it, is that a CR that is in the cluster now? What is that YAML that you showed?

Let's take a look at it again. When I'm installing the chart, this is the pure values YAML, right? It's not a CR. It's exactly what's in the values YAML, and that gets substituted into the chart.

Yeah, but when you go to the Helm release and you did some edits there, where is that YAML stored in the cluster? In etcd? Is it a custom resource, a custom object, a config map, or a secret?

This is the secret that I was showing you. So this is stored in etcd, but it's not a custom resource; it's essentially a different presentation of the secret data. We're just extracting the information from the secret and visualizing it in a way that is comprehensible in the UI.

I see. Okay, yeah, I don't
want to take a lot of time then. Sorry, you were warned, I talk a lot.

That's okay, that was very interesting.

A secret is not something that you put in Git as a kind; I was thinking of a config map. But if it's a secret, I guess it can be put there, as long as they're sealed secrets or there are no credentials in them, and then you can sync them from Git, right?

Right, but this secret you don't really need to put in Git. This is just the mechanics of how Helm 3 works: every time you deploy a chart, the release object is stored in the namespace as a release. So this you wouldn't put in the Git repo. Then Argo CD, in your example, installs the chart into the cluster, and one of these secrets automatically gets created for you.

By who? By the Go client?

Yeah, either by the CLI or by the Go client.

Okay. Yep, thank you.

Perfect. And Carlos, we'll have you back on a couple of other topics that I know are near and dear to your heart sometime soon, so watch out. Eric had a couple of points, if he'd unmute himself.

Hi, yes, Eric Erlundson, I work with the AI CoE. I was curious: we had done some work with, I don't know what to call them, the old-style OpenShift templates, and of course one of the features of those is that if you define one, it creates a little web form in the console for people to fill in the values for the template parameters. I was just wanting to clarify: with this new Helm 3 stuff, is there going to be a way to fill that out in the console, or is it expected that customers who want to create one of these will actually have to use the command line?

Very good question, and I actually should have mentioned this before, but I'm glad you brought it up. Exactly like you said, this YAML is very error-prone if you have to modify it, and for a lot of
charts, so let's look at one of the other charts that we have, I keep going to this one, when you look at the YAML, it can be a very, very large YAML, and keeping all the indentation right is difficult. So one of the things we want to do, which was made possible in Helm 3, is, similar to what you've seen for templates, to generate a form for this particular YAML so that people can enter the values there. What has made this possible is that in Helm 3 every chart can contain a schema for the values YAML. The schema would say: there is a field called server name, the title is this, the description is this, it accepts strings between eight and a hundred characters. So you can easily generate a form from that, with validation and everything. This is one of the things we are doing going forward. I don't have a timeline, so I didn't put it on the roadmap slide, but it's definitely one of the areas where we want to add the option to go back and forth between a form and this values YAML in the YAML editor.

Okay, cool. So it's basically on your roadmap somewhere to do that.

Yeah.

That's great, thank you.

I'm going to unmute Balaji so he can ask his question next. And if you're not talking, just mute yourself; you can always unmute yourself and come back in to follow up with Balaji.

Yeah, thanks. Hey, Simon, very great presentation. My question is: with Helm 3 there is obviously official support from OpenShift, and operators have been our favorite way of deployment. So what would the messaging be, and how do you foresee the ecosystem playing out? Do you think people will use Helm 3 for what it's capable of, initial deployment and updates, and continue to use operators for what they do best? In other words, do they use the tools that are best suited for what they do, or do you think people would choose one or the other?
This is also a very good question. I definitely see it as people picking what fits their use case, and this is something we are already seeing. If you look at, I have to go to the admin console, the OperatorHub and the pieces of software that are made available there, let me pick an example, like the database CockroachDB: there is a Helm chart for CockroachDB as well, and I can use the Helm CLI to deploy it. But after I deploy it, I'm not an expert on managing CockroachDB. I know how to use it in my application, but I don't really know how to manage the pods and make sure it keeps running. Within a development environment that's fine, but if I want to run it in production, I don't know how to do that. So an operator seems a lot more suitable here, and that's why an operator was created for it, iteratively adding more maturity so it can do more around it.

But imagine I'm building the payroll application within our organization, or some other custom application, and it changes a lot with every release. It is not a piece of software that is consumed by other development teams within their software; rather, it is consumed by the users of the software through the web UI it offers. The type of operations it needs varies a lot from version to version with every change we make; it's not really feasible to have a fixed set of automation rules for how to manage this payroll application. In these cases, creating a piece of software to manage the payroll application might not be reasonable, because you'd have to constantly modify that piece of software and it would be used only once, for one deployment; a chart might be a lot more useful here. But for something like that database, which is installed thousands of times across thousands of clusters, all of them managed the exact same way, it's very reasonable to have an
operator that does this, instead of all those thousands of teams having to reinvent the wheel, go learn how to manage CockroachDB, and develop the operational capabilities needed to run CockroachDB in production. So I definitely see it already playing out that way: a lot of the software you see coming out as operators already had a Helm chart, but there was a need to automate the day-to-day operations that people had to learn, so they started creating operators for it.

Yeah, thank you so much, very nice.

Thank you so much.
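The three-way merge that came up in the Q&A can be illustrated with a small sketch. This is a simplified model in Python, not Helm's actual implementation: real Helm 3 operates on full Kubernetes manifests and computes strategic merge patches, while this sketch only walks flat dictionaries, but it shows the idea of comparing the old rendered manifest, the live cluster state, and the new rendered manifest, rather than just old versus new as Helm 2 did.

```python
def three_way_merge(old_chart, live, new_chart):
    """Sketch of Helm 3's three-way merge idea: decide the desired state
    by comparing the previously rendered manifest (old_chart), the live
    object in the cluster (live), and the newly rendered manifest
    (new_chart)."""
    result = {}
    for key in set(old_chart) | set(live) | set(new_chart):
        old_val = old_chart.get(key)
        live_val = live.get(key)
        new_val = new_chart.get(key)
        if new_val != old_val:
            # The chart itself changed this field: the new render wins.
            # If the new chart dropped the field entirely, it is removed.
            if new_val is not None:
                result[key] = new_val
        elif live_val is not None:
            # The chart didn't change this field: keep the live value,
            # preserving changes made out-of-band (e.g. an autoscaler
            # or a manual kubectl edit).
            result[key] = live_val
    return result

# Hypothetical example: an autoscaler raised replicas in-cluster, while
# the new chart render only changes the image tag.
old = {"image": "myapp:1.0", "replicas": 2}
live = {"image": "myapp:1.0", "replicas": 5}   # replicas changed out-of-band
new = {"image": "myapp:1.1", "replicas": 2}

merged = three_way_merge(old, live, new)
assert merged == {"image": "myapp:1.1", "replicas": 5}
```

Note how the image update from the chart is applied while the out-of-band replica count survives; a plain two-way (old vs. new) comparison would have reset `replicas` back to 2, which is exactly the Helm 2 behavior that the three-way merge in Helm 3 was meant to fix.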