Good morning, good afternoon, good evening, and welcome to another edition of Ask the Product Manager office hours. Today we're discussing Operators and Helm. I'm joined by three of my fellow Red Hatters: Karina, whom you know from other shows on the channel, plus Daniel Messer and Stevan Le Meur. Daniel, I'd like to hand it off to you to introduce yourself, and then you can pass it along as you see fit.

Sure, thanks Chris, and thanks for having me again on the stream. My name is Daniel Messer. I'm a member of the OpenShift product management team, and I'm looking after the Operator Framework. Stevan, over to you.

Hi everybody, I'm Stevan. I'm a product manager at Red Hat focusing on developer tools for OpenShift.

Awesome. I'll give my quick introduction for those of you who don't join me on Tuesdays: I'm a product manager on OpenShift, and in true OpenShift fashion I cover a lot of areas.

So Chris, today we're looking to address a very common question, at least for Stevan and me, and Karina obviously as well, one that has been sitting with us for quite some time now that we've made it possible for OpenShift users to use Operators and Helm in a first-class way on OpenShift.

Yeah, it's the famous question of: oh my god, here's this Operator thing, but also here's this Helm thing. What do I do? How do I make the bits work?

Exactly. Most people who ask us are either administrators who have both toolchains available and want to give them to their developers, or application developers who are looking to get their applications into people's hands on OpenShift and provide value.
And they also ask us: this Helm thing seems to be really common, and I can package all my stuff up and then it deploys. But then there's also this Operator thing, which seems to have a lot of traction but also a higher entry bar. So what do I get in return? When should I actually write an Operator, and when should I just package my stuff up in a Helm chart? And who is the person who writes an Operator, versus the person who creates a Helm chart?

Right. So Stevan and I are going to try to walk through today how to answer that question, both from the perspective of the end user getting the app and from the perspective of the person who owns the app and wants to make it available to other people. Karina has already covered some of this in her webinars and streams.

In the first part we want to take a look at Helm: how it works on OpenShift and what kind of user experience you get from it. So I'm going to hand it over to Stevan to give us a quick intro into Helm, how it's being used, and how it feels on OpenShift these days.

Stevan, before you do your quick intro, I want to point out that both of these technologies are Cloud Native Computing Foundation projects.

Yeah, completely.
The Helm project is quite a mature project in the foundation. I think it started in 2016, so it's already five years old, and it has a lot of contributors. The community is very active, and there is a vibrant set of active maintainers on the project who really take care of it with a high level of maturity. So it's really interesting, and there's a lot going on.

The idea behind Helm is to help developers better manage all their Kubernetes YAML and all their Kubernetes resources when it's time to define how an application should be packaged for Kubernetes and how it should be made configurable. Helm is really a package manager that provides a better way to manage all your Kubernetes resources: you define exactly how you want to package your application and how you want to make it configurable.

Daniel, if you can go to the next slide. Within Helm you have charts. A chart is a package that consists of all the Kubernetes resources and manifests for your application. You also have repositories, which are the places where charts are stored, shared, and distributed. And you have releases, which are specific instances of a chart deployed on your cluster.

How does it work? If we can go to the next slide: inside a chart your Kubernetes manifests are templates, so your manifests contain template placeholders, and the values for those placeholders are defined in the values.yaml file, which provides the configuration of your application. Helm takes all of that and renders the Kubernetes resources.
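To make the chart/values/templates split Stevan describes concrete, here is a rough sketch of a minimal chart; the names and values are made up for illustration, not taken from the demo:

```yaml
# Chart layout:
#   mychart/Chart.yaml        -- chart name and version
#   mychart/values.yaml       -- default configuration values
#   mychart/templates/*.yaml  -- manifests with template placeholders

# values.yaml (defaults the user can override at install time)
replicaCount: 2
image: quay.io/example/mailer:1.0

# templates/deployment.yaml (fragment) -- placeholders filled in at render time
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-mailer
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: mailer
          image: {{ .Values.image }}
```

At install time Helm merges the user-supplied values over the defaults, renders every file under `templates/`, and submits the result to the cluster.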
You can use the Helm CLI: you take a chart and provide the different values for all the configuration options of your application. Or you can go to the OpenShift console, where you have a UI showing all the different options as well. Helm then creates the resources in your namespace, and the cluster pulls the images from the repositories you reference. That's how Helm works. As you can see, it's a really convenient way to package your application, make it templatable, and make it configurable when you install it.

So let's actually take a look at how you do that. We can go into a quick demo. I'm going to share my screen, and hopefully this will run properly. You should be able to see my screen. Is it visible? Yeah.

In this demo I have created a very basic microservice. It's a stateless microservice that just sends an email through my SMTP server. It's a Quarkus application, and it uses the mailer extension to help me send the email. It uses some environment variables so that it knows my SMTP server and my credentials, and usually those are not things I want to ship with my microservice, obviously.

What I'd like to do with this microservice, once I have it running on my laptop and I know it runs properly, is package it and share it with the rest of my team, so that if they want to use it, reuse it, or deploy it, they can do so in an easy way and configure it the way they need.

To help me with this, I used JKube. I think there was a presentation about JKube a few OpenShift.tv sessions ago, so I won't go into the details of JKube, but honestly, it's super easy.
I just added it to my pom.xml file, and JKube gives me a wide collection of build commands and goals that I can use to generate the Kubernetes resources and to publish to an OpenShift cluster if I want.

Starting from here, I'm just going to show you that the application is working. I'm already authenticated against my Kubernetes cluster, an OpenShift one actually, and I'll create a new project named mailer-demo. Okay, I'm in this project. I used JKube to generate the manifests and the Kubernetes deployment, so I have that available here.

Now I'm just going to deploy my application. I use my dev OpenShift profile, I create the OpenShift resources, and then I apply them, also providing my environment variables for my SMTP server. While it's doing this, you can see it's just a Maven command, and I'm able to push my application onto OpenShift. I love JKube, it's really magic.

So here I'm coming into my project mailer-demo, and I should be able to see my application, if my cluster is responding.

Sometimes I've had that as well, and it helped to just switch to another pane or refresh the browser. Yeah. Maybe... you're not back in the admin view.

Right. So here you can see the different resources that have been created, the route as well, so I can click on it, go to the endpoint, and call the REST API, because that's all it is, a REST API. And if I go to my inbox... hello! So that's cool.
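For reference, wiring Eclipse JKube into a Maven project is a single plugin entry in pom.xml. This is a sketch, not the exact snippet from Stevan's project; the version is deliberately left out:

```xml
<!-- Eclipse JKube: generates Kubernetes/OpenShift manifests and Helm charts -->
<plugin>
  <groupId>org.eclipse.jkube</groupId>
  <artifactId>kubernetes-maven-plugin</artifactId>
  <!-- add a <version> element pinned to the current JKube release -->
</plugin>
```

Once the plugin is on the build, the `k8s:*` goals Stevan uses below become available.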
That's working. Obviously I want to go one step further, because I want to share it with my team. What I did is create a jkube folder in my sources, and here in my deployment YAML I provide some templates for my environment variables, so that when JKube creates the Helm chart it knows these need to be configurable options. Then I have a values template that provides a default for each of those, but here I just provide defaults.

So now I just run the Maven goal to create the resources, and then the Helm goal, and it creates my Helm chart. Here's my Helm chart, and if I go into it I can actually see that in my deployment, which has all been generated, I have a template variable. Of course this is very basic; I could do much more advanced things thanks to Helm, but here you get an understanding of what's going on.

Now if I want, I can use the Helm CLI to install my chart. Before I do that, I'm going to remove my deployment and also remove my service — I should have one service to delete as well. And from here I can do a helm install of mailer-demo: I take the archive of my Helm chart and pass all my environment variables as options for the chart install. That's it. You can see even from here that my service has been created.

And you see here it's a little bit different: on the topology view I can see that it's a Helm chart that has been deployed, so I have some more options related to my Helm release here, and I can open it. So that's cool. I have my Helm chart, and I'm able to deploy it onto OpenShift.

But now I want to share it, so that others can also consume and reuse my microservice. What I'm going to do is create a Helm chart repository. It's very easy to create one.
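Reconstructing this part of the demo as a command sketch — the chart archive path and the value names are approximations, not an exact capture of what Stevan typed:

```shell
# Generate the Kubernetes manifests, then package them as a Helm chart
mvn k8s:resource k8s:helm

# Remove the directly-applied resources so the chart install starts clean
oc delete deployment mailer-demo
oc delete service mailer-demo

# Install the generated chart archive, passing the SMTP settings as values
helm install mailer-demo ./mailer-demo-0.1.0.tar.gz \
  --set smtp.host=smtp.example.com \
  --set smtp.user=demo \
  --set smtp.password=changeme
```

The `--set` flags are the CLI equivalent of the configuration form the OpenShift console renders from the chart's values.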
I created it using the Helm CLI as well, and you can actually host it on GitHub. What I did is create a chart repo, and inside it I put a docs folder where I put the archives of all my charts. Basically, a Helm chart repository is just an index.yaml file with a bunch of metadata about where the charts are and what they contain.

So now I'll just copy my Helm chart into my repository. Here we go. This is my VS Code window for my chart repository. From the Helm CLI I can update my index.yaml and add one new entry for the mailer quickstart microservice I created, and then I push that to my repo. Nice, it should be here. And I can go back into OpenShift.

What I'm going to do now is add my Helm chart repository to OpenShift, so that I can consume the chart I just created and share it as well. I look for HelmChartRepository here and create a new resource for that. As you can see, the YAML definition of a HelmChartRepository is quite easy: I just add the name of my repo and the URL, I hit Create, and boom.

Now if I go back to the Developer Catalog and go to the Helm charts, I can see my repository appearing here, and my chart is coming up. If I click Install, I get a way to provide the configuration for my microservice.

So, pretty cool and pretty easy: as a developer I can easily get going, I get some help from Helm to manage my Kubernetes resources, and I can package and distribute my application much more easily to my teammates, for example. That's a little bit the mindset you have when you're a developer.
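The HelmChartRepository resource Stevan creates in the console looks roughly like this; the name and URL below are placeholders for his actual repo:

```yaml
apiVersion: helm.openshift.io/v1beta1
kind: HelmChartRepository
metadata:
  name: my-charts                                # shown in the Developer Catalog
spec:
  connectionConfig:
    url: https://example.github.io/helm-charts   # placeholder repository URL
```

Once this cluster-scoped resource exists, every chart listed in that repository's index.yaml shows up in the Developer Catalog for installation.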
It's really: okay, I package my application, I have a nice way to manage all my YAML files and to make my application configurable for others, and I can also handle, as part of my chart, some of the upgrade mechanics for my application.

But if you start to switch your mindset and think about how your application is going to operate, how it's going to run once you've deployed it and have to maintain it in production, then Helm starts to come up a little bit short, because it won't manage the full life cycle of your application. You'd have to handle that yourself, and while that's entirely possible, it isn't automated. On your adoption curve of cloud native, that's not exactly the best place to be.

For that, Operators provide much more advanced capabilities. They help you manage the life cycle of your application: things like backup and failure recovery. You can also leverage insights from how your application is behaving while it's running to define how it should operate, and you can automate that with Operators. That's probably where I'll hand over to Daniel, who will give you much more color on everything you can do with Operators.

Cool, that was a great demo, Stevan.
I actually hadn't seen all of the Helm console in OpenShift yet, so it was really impressive to see how great that experience now is in the developer console. Thank you.

What you've highlighted is that this microservice example was a really great fit for Helm. When people ask me at what point they should write an Operator versus staying with a packaging technology like Helm, I usually give an almost academic answer: if everything your application needs to do across its life cycle, from starting up, being reconfigured, and being updated, to eventually being uninstalled or retired, can be done with core, on-board Kubernetes features like Deployments, StatefulSets, PVCs, Services, Routes, and so on, then you should just stay with Helm. In that case the life cycle of your application is completely in the hands of Kubernetes, and all the primitives Kubernetes knows about, how to do a rollout, how to do a deployment, how to do an update, will take care of it. That's why this arrow here in the graphic from Helm reaches a little bit into the seamless-updates life cycle phase.
And when that's all that's needed for your app, if you have a stateless microservice like Stevan showed you, then that's actually enough. Helm is the perfect tool, and I'm always for using the right tool for the right job, so you should stay with Helm. An Operator wouldn't give you any advantage over that, but compared to packaging a Helm chart it would be much harder to get to the same state, because you'd need to write the Operator first.

What the Operator pattern essentially gives you is the ability to adapt to a more complex and more complete life cycle, including what happens after day one, after you've installed and initially configured your application. That's where the interesting stuff happens. As we see more adoption of OpenShift and Kubernetes in production, the lifetime of these workloads gets longer, and users and customers are asking us to provide tooling and experience for long-running, critical workloads on OpenShift.

In this sense you get questions like: how is my application updated? How does it update and migrate its database schema? How is it backed up? Is there some kind of failure recovery happening? How do I make the application auto-tune itself based on a certain workload pattern? These are things core Kubernetes isn't really up to right now, and it shouldn't be, because Kubernetes has always been written to be a platform of platforms. What that means is that it's extensible: you can teach it new tricks to do these kinds of things for your app. And the Operator pattern is exactly that.
It's teaching your Kubernetes, your OpenShift system, how to run a database over a period of time, with all these things like backup, restore, failover, and failback embedded in the logic, and making that a first-class citizen.

At that point people usually tell me, okay, that makes sense, and there's really no discussion anymore. They take away that knowledge, go home, and go about their business. So I thought we might need to add a few more examples, tangible inflection points at which you, as the application owner or developer, should think: maybe I should write an Operator to cover that specific piece of my application's operational needs. That's what I'm going to try to show you now.

You've probably seen a slide like this a number of times now; it's the internal anatomy of an Operator. I don't want to go into a lot of detail here, but the biggest difference between what you've seen from Stevan and what you're going to see here is this: a Helm chart is a packaged way of distributing your manifests. It lets you template out the manifests that make up your application, and that gets your application deployed. An Operator, on the other hand, is actually a piece of software, and that software is not the application you're looking for. It's a piece of software giving you this application as a service. It sits in the system, on the cluster itself, and it watches for events. When it sees events it's interested in, it can start workflows and automation.

One use case of that is to model application deployment in a way where the Operator introduces a new custom resource type in the system. So next to Deployments and Pods and StatefulSets, you now have a thing called EnterpriseDatabase.
Oops. EnterpriseDatabase, right, and that is now a first-class concept in your system. It's actually an API, a custom resource type, that the Operator watches for. So when one pops up, or an existing one is modified, the Operator starts to do its work.

What this work looks like is something we call reconciliation, and that basically means the Operator will always take a look at the left-hand side and see what's specified there, say, I want a database, version 3.2.13, size 5, and at the right-hand side, what's happening on the cluster, what's already there to fulfill this request. It will constantly work toward fulfilling that request in an iterative fashion, just as Kubernetes itself works, to eventually come to a state where enough Deployments, StatefulSets, Secrets, ConfigMaps, and PersistentVolumes are there to have a database service running. And the way to do that sits in the Operator: this custom logic, custom to this specific enterprise database, is codified in the Operator, which is a process running in your cluster talking to the Kubernetes API.

So that's something different from a Helm chart. With Helm you essentially get the application yourself; with an Operator you get a service that gives you the application. The difference is really: how do I get the application up and running as quickly as possible with some minor customization, versus having an as-a-service experience?

The analogy I usually draw is when people want to have a database on, say, AWS, and they want a Postgres database. What do they do? They could obviously say: here's a CloudFormation template that lays out all the resources, virtual machines, EC2 instances, disk storage, S3 object storage, whatever you need, and arranges it so that at some point you have a Postgres endpoint running. This is the templating approach.
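A custom resource of the kind Daniel sketches on the slide might look like this; the API group and field names here are invented for illustration, since EnterpriseDatabase is a made-up example in the talk:

```yaml
apiVersion: example.com/v1alpha1   # hypothetical API group served by the operator
kind: EnterpriseDatabase
metadata:
  name: orders-db
spec:
  version: "3.2.13"                # the desired state the operator reconciles toward
  size: 5
```

The user only ever writes this high-level declaration; the Deployments, StatefulSets, Secrets, and PersistentVolumes that realize it are created and continuously repaired by the Operator's reconciliation loop.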
It's what we saw with Helm, and it's also what Kustomize and other templating tools do. But that's not what AWS users are actually using. What they're using is RDS, a managed service, the Relational Database Service, which gives them a Postgres database as a service, with all the high-level Postgres controls you would expect from a database service. An Operator is really like RDS, but in your cluster, wherever your cluster runs. So by installing an Operator and giving people an Operator experience, you're giving them an RDS-like experience, but on your cluster. It's on-demand, it's as-a-service, and people select it from a service-catalog-type interface. That's already a difference in the end-user experience, and depending on which one you want, you choose Helm or Operators.

The other aspect that's important to understand is where things run. What Stevan showed was basically the Helm client creating resources on the cluster, and the Helm client ran on Stevan's notebook. In the case of the OpenShift console, it actually runs as part of OpenShift, but normally Helm is an external tool that runs outside the cluster, and that's where all the logic lives. The tool only runs when it's invoked, either by a user or by some GitOps machinery, and then it renders all these things, hands the resources to the cluster, and it's done. It uses the Kubernetes APIs, but in a one-off fashion.

A Kubernetes Operator runs in the cluster, and it can be used by multiple users. If you want to share a Helm release among multiple users, all the users ideally need to have the same Helm client as well as the same chart version.
Otherwise you're going to have conflicts. With an Operator, all the logic sits in the cluster: the binary that has the logic runs in the cluster and can be used by multiple users. It watches the control plane of Kubernetes, so it can see all the events that are happening, and it can use this capability to get you an application, or to do other things.

So it's not just application deployment: with Kubernetes Operators you can also trigger event-based automation, or even talk to an external API like an external cloud service. If you want OpenShift to consume an Azure-hosted managed database, you use the Azure Service Operator and its APIs to get you an instance of that, and the Operator will facilitate it on your behalf using the Azure APIs. It actually deploys nothing on your cluster. All it does is react to your request, which you place in the cluster, and respond to it by giving you the credentials for that request. So that's also a different nature of running software: a Kubernetes Operator can do more than application deployment. It can automate things on the system, and it can use external systems to do that.

As I already said, it's constantly doing that. It runs in a loop, and it notices when things change in the system. So when your application requires you to react to system changes, beyond what Kubernetes automatically does with rescheduling pods and so on, then you would write an Operator to watch those events and trigger custom code. This might be a little bit abstract, but it's really important to understand that the Operator runs all the time, whereas Helm only runs when you invoke it, and then it's done. You have no chance to do anything with Helm after that unless you run the Helm binary again. But if I go in and delete a part of Stevan's mailer-demo deployment, like the Service, there's nothing in the system that notices.
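The always-on reconcile loop Daniel keeps coming back to can be sketched without any cluster at all. This is a toy model, assuming nothing from operator-sdk or controller-runtime: `desired` stands in for the custom resource's spec, `actual` for what exists on the cluster, and one reconcile pass recreates whatever has gone missing, which is why the deleted Service would come back.

```go
package main

import "fmt"

// cluster is a stand-in for the real cluster state an operator observes.
type cluster struct {
	actual map[string]bool
}

// reconcile compares the desired state against the actual state and
// "creates" whatever is missing -- one iteration of the loop that an
// operator runs continuously against the Kubernetes control plane.
func (c *cluster) reconcile(desired []string) []string {
	var created []string
	for _, obj := range desired {
		if !c.actual[obj] {
			c.actual[obj] = true // create the missing object
			created = append(created, obj)
		}
	}
	return created
}

func main() {
	c := &cluster{actual: map[string]bool{}}
	desired := []string{"deployment/mailer", "service/mailer"}

	fmt.Println(c.reconcile(desired)) // first pass creates both objects

	// Someone deletes the service out from under us...
	delete(c.actual, "service/mailer")

	// ...and the very next reconcile pass restores it automatically.
	fmt.Println(c.reconcile(desired))
}
```

Helm has no equivalent of this loop: after `helm install` returns, nothing is left running to notice the deletion.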
It only gets restored when you run Helm again. With an Operator, that's something that would be recreated automatically within a couple of seconds.

But let's take a different look at what it means to react to system events. I'm going to quickly share my OpenShift cluster, in which I have a very interesting Operator installed that I've really come to like: the Namespace Configuration Operator. It ships as part of OpenShift's community catalog, and what it does is help with onboarding of new users and projects. Specifically, it can watch for new projects being created and ensure that certain things happen at that point, in particular that certain resources are applied to the project by default. The most common case is default resource quotas. We all know the enemy of a stable cluster is all those projects and namespaces that have no max quota at all: they will just gobble up all the resources in your cluster and create a huge noisy-neighbor syndrome.

The Namespace Configuration Operator allows me to define what should happen when a certain namespace appears, and this is done via an API called NamespaceConfig. I've created an example here, and if I go there I can see what it does. It lets me provide a label selector to match the namespaces to which this default set of resources should be applied; the label selector here is size equals small. And there is a set of objects that will always be applied when namespaces are created with that label. One is the ResourceQuota: I want to limit these namespaces to a maximum of four pods, a maximum usage of two CPUs and two gigs of memory, and four gigs of ephemeral storage.
That should be there by default, without me having to make it happen manually as the admin who provisions the project for the developer, and without having to trust my developers to create reasonable resource quotas themselves, because they're likely to throw in much higher values. Developers tend to overstate their requirements to be on the safe side. The other thing it does is configure this namespace with an egress network policy that denies all egress, so workloads cannot reach the internet; that's important in security-sensitive environments. These two objects will be created by the Operator automatically whenever a matching new project is created.

So let's go ahead and try this real quick. Let me just create a project here called developer-unlimited. This is just for illustrative purposes, to show that by default no resource quotas or egress policies are applied when you create a project on OpenShift. That's something you should probably know about when you give OpenShift to your tenants and colleagues, because by default I can do anything in that namespace in terms of requesting resources.

What we want, however, is a saner approach where by default there are some limits. So let's create another project. I'm going to do this with YAML for the sake of the demonstration: I create a project called developer-sandbox with this label, and I hit Create. And now what you see is that a ResourceQuota has been applied automatically. That's because the Operator watched for namespaces being created in the system, and when that happened, and the namespace also fulfilled the criterion of having the label size: small, it started to place these objects. So here's the ResourceQuota.
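In the spirit of Daniel's demo, a NamespaceConfig could look roughly like the sketch below. The exact field names vary between versions of the Namespace Configuration Operator, so treat this as an approximation rather than a copy of his resource:

```yaml
apiVersion: redhatcop.redhat.io/v1alpha1
kind: NamespaceConfig
metadata:
  name: small-namespaces
spec:
  labelSelector:
    matchLabels:
      size: small          # namespaces carrying this label receive the defaults
  resources:               # objects stamped into every matching namespace
    - apiVersion: v1
      kind: ResourceQuota
      metadata:
        name: small-quota
      spec:
        hard:
          pods: "4"
          requests.cpu: "2"
          requests.memory: 2Gi
```

A deny-all egress NetworkPolicy would be listed as a second entry under the same resources list.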
It's the exact one we've seen before. As you can see here in the specification, it limits you to two CPUs at max and four pods at max. And developers who might want to circumvent you as the admin cannot simply delete it: the Operator will just restore it as part of its reconciliation. This is the event-based automation you can do with an Operator. This is not about deploying an application; this is really about making workflows happen automatically based on certain things happening in the system. And because the Operator is in the system, running all the time, running in a loop, you can actually do that.

So I can try to go ahead and delete that here. I didn't try this before, so this is truly live. I expect that thing to come back... and there it is. That didn't even take a second, so this is really aggressive. As it should be: this is proper tenant isolation. So this is one use case of an Operator besides just application deployment. It's really useful, and it's hard to do when you're sitting outside the system. This is best done by something that runs inside the system, watching over your cluster at all times.

The power of that reconciliation loop is nothing to be sneezed at. It's really a game changer.

Exactly. The loop nature is really what makes it a different kind of software. And this is also different from writing your own Ansible playbooks or bash scripts that run in a cron-job fashion outside of the cluster. That also works, it's a viable approach, but this is a much more predictable and versatile experience.

As you can see, with Helm you have a very predictable and standardized workflow that everybody has come to love: download the chart, customize the values.yaml, deploy the app, reconfigure it with helm upgrade, and eventually retire it.
That's why Helm is super popular: every application packaged with Helm works this way, so you learn it once and then apply that learning to basically all Helm charts. But it usually won't do much outside of that. On the right-hand side you see all the things you can do with an Operator: of course request the app, reconfigure it, or update it, but also automate workflows as you saw in the demo, request external services from your cloud provider, react to cluster events, and do more complex sequenced things like backup and restore, or failover and failback. So it's not just day-two operations, it's day-one operations too, but on top of that integration with external systems and this automation character. That's what I wanted to show you about the different natures of the two.

Here's an additional aspect that often gets forgotten, and it's especially important for maintainers of apps who want to give a great user experience to their customers. If you're using Helm, your RBAC, the access you currently have in your cluster, defines what the Helm chart can do and what it can deploy. So if you're deploying something that requires cluster-admin, say your Helm chart creates a StorageClass because you're deploying software-defined storage into your system, that would require you to be cluster-admin. There's no way you can give this to an unprivileged user, because Helm runs with your permissions. That's why most things packaged as Helm charts are scoped down to a namespace and a single user, and that's fine. But with Kubernetes Operators, you have the ability to have one set of permissions used by the Operator, another that your users need to have, and another that the application eventually gets.
Remember that in Kubernetes you cannot give an application more permissions than you have yourself. With an Operator you can get around that a little bit, because you can have unprivileged users use a privileged Operator that deploys things in the cluster, so those users don't need to hold the privileges themselves. It's a kind of privilege delegation mechanism that allows the Operator to carry out more privileged tasks without you as the admin handing those privileges to the users. As an application developer that's important to me, because I want my application to be usable by as many people as possible, and I don't want people to be stuck in environments where they aren't cluster-admin. So that's another axis along which you have these two choices.

So how does each one do its job? This is where it becomes really interesting. With Helm, once it renders the chart, it essentially throws all the resources at the cluster. This is typical Kubernetes: give me the entire end state and I'll keep reconciling until I reach your desired state. It's eventual consistency, the goal-seeking nature of Kubernetes. For things like the microservice we saw, or stateless applications in general, this is just fine. They don't need any more than that, and that's why Helm is so suitable for them: it runs once, deploys all these things at once, in parallel, and essentially lets Kubernetes do its thing. There's limited room for customizing how that works: there are pre- and post-hooks you can run to perform some basic life cycle steps beyond just deploying everything, but at its heart, that's it.

On the right-hand side you have the Operator, and because the Operator runs code, you can sequence stuff.
You can do the first thing, wait for it to finish, then do the second thing, wait for it to finish, before you do the third thing if the outcome is A, or the fourth thing if the outcome is B. So you have conditional logic in this operator that enforces ordering, consistency, and wait states to ensure the integrity of the application, especially over time. So if your application needs that ordering, then it's probably a good idea to use the operator pattern, because at some point it's going to be hard to do this with Helm.

Maybe let's take a look at an example of an application stack that actually has some of these requirements. You might not know this, but I'm also the product manager for something called Quay. Quay is a registry platform, a container registry, and we don't need to go into details on what it does. Suffice it to say, it is a Python application that has certain requirements: it needs a Redis, it needs a PostgreSQL database, it needs some object storage, and then it all comes together with a config map or a secret which you attach to your pods running the Quay Python processes, in a scaled-out way, in order to get a highly available registry. So let's say I'm the maintainer of Quay and I want to give this to you, right?
So what you need to do in order to get this up: you get yourself a Redis, you get yourself a Postgres and an object storage, then you need to run something that we call a config editor, which is essentially a UI that generates the config file. You technically don't need this UI, except that it's also the thing that actually creates the initial database schema that Quay needs to run. So when you have done all of that, in order, and waited for it all to be ready, you essentially download this config file, inject it into your pods, and off you go. Depending on what features of Quay you want: if you want security scanning, you need to actually run a separate thing called Clair first, wait for that to be up and online, and then you run Quay. And subsequently, if you want to use another feature called repo mirroring, you also run additional workers for that. It's not really important what these things mean; just know that certain things need to happen in order for you to actually get the deployment.

So if I were to do this with Helm, it could very well go like this. I go "helm install, get me a Redis," "helm install, get me a Postgres," and then also "helm install, get me an object storage," which in this case is something called MinIO, a small-scale object storage that's just sufficient for demo needs. All these Helm charts are going to deploy all these things and spit out some instructions on how I get to the service I just deployed and how to get the credentials. I take note of all of this and write it down, and then essentially launch the Quay config app, which gives me a UI into which I enter these credentials. The Quay system then validates that it can actually access the database and the object storage, and also creates the database schema. These things only need to run once, before you run Quay.
So, trying to get to my config app here, switching screen share real quick back to OpenShift... oh, the application isn't available. Well, that's kind of typical for Helm, right? Because Helm doesn't know that the config app needs, you know, maybe ten seconds to come up. It doesn't wait for the pod to be scheduled; it just spits out the data. So if I wait a couple of seconds and refresh, it's just going to be there, and now I see a UI in which I can enter all the data, right? So this is how it could go with something like Helm, and for developers this is probably fine. But if I ship this to customers, I can count on 20 or 30 percent getting it wrong: they'll use something unsupported, or just type in the data wrong. And trying to automate this is kind of hard, because extracting all these credentials from the Helm charts that you've just seen is a little bit manual. Doing that in a GitOps fashion or in an Ansible playbook is kind of hard.

So maybe there's a different way, and this different way could look like this, where you actually have an operator that does all of this for you. So let's switch namespaces real quick, go to the quay-operator namespace and the list of installed operators, and you see we have the Quay operator installed. That thing knows how to do all of the sequencing and all of the provisioning; it has it coded in. So I just go in here and create a QuayRegistry, and it gives me a form which is generated by the UI based on the inputs that the operator supports. And if I switch to the YAML view, you see it's just another Kubernetes resource. So if you're on the command line, you could also just do this with kubectl. And this is how I tell the operator to do something, right?
I basically say: these are the Quay components that I need; I want to have security scanning, the database, object storage, yadda yadda yadda, and the operator goes off and deploys all of that. So let's say I tell it to not give me a database, because I want to use an external database. I just switch this component off and hit create. I could essentially also leave all of this out of the config spec, and that would also work; it just makes an all-in-one Quay deployment. Very easy to do. But let's say I want to deploy this without a database. The operator then transitions to a state where a "rollout blocked" condition appears, and if you go to the resource, which we just called "example" for the sake of the demonstration, and scroll down, you see the reason: the config is invalid, because I asked the operator not to provide a Postgres database, but Quay needs a Postgres database, and the config bundle from which it would normally retrieve where my database lives doesn't have that information. And subsequently, the operator actually stopped doing anything. So there are no additional pods here, no crash-looping Quay pods that can't find the database. It's deploying Clair, which comes with its own database, but there are no Quay pods crash-looping, because the logic of the sequence I showed you in the slide earlier is coded into that operator. So now, as a user, what I'm actually doing is going to the config app that the operator automatically created for me, and I'm going to give it the Postgres credentials of my external Postgres database. I'm not going to do this now, for the sake of time in this stream, but that's how you would do it. And that's the kind of interaction that makes it really nice for users to discover how your app works, rather than failing hard and creating all kinds of errors.
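For reference, the resource in question could look roughly like this. The API group and field names below are reconstructed from memory of the Quay 3.x operator, so treat them as approximate and check the operator's documentation before using them:

```yaml
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example
spec:
  components:
    - kind: postgres
      managed: false   # bring your own database; the operator blocks the
                       # rollout until the config bundle has connection details
    - kind: clair
      managed: true    # security scanning stays operator-managed
```

Leaving `spec.components` out entirely would be the all-in-one deployment Daniel describes, with every component managed by the operator.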
It kind of goes to a state where it's immediately clear what you need to do. And providing that logic inside the operator is truly a gift, right? You're giving a gift to folks on how you intend this application to be used long-term, and kind of helping them along the way, so they don't have to worry about those edge cases that the operator knows about and can handle. If a state changes or something, it's going to put itself back into a good spot. So I get fewer support calls, right? That's what I want.

So, reconfiguring Quay is actually quite easy. You could kind of do this with Helm: reconfiguring Quay is just updating a secret or config map and then bouncing your pods. It turns out that bouncing a pod doesn't happen automatically: if you have a deployment that mounts a config map or a secret, and that is subsequently updated, Kubernetes does nothing to your deployment. So Helm has a little trick up its sleeve to make the deployment bounce, and you can look this up in the documentation; it's actually quite nifty. What you do is you track a hash of your config map in an annotation on your deployment, and because this hash changes when you update the config map, Helm knows it needs to redeploy, in this case the Clair or Quay deployments. But it doesn't do that in order, right? And it needs to happen in order, because Clair needs to be up first before Quay can start, and you may have applications that have this requirement. So this is kind of doable in Helm, but it's something that the operator can just take care of in its internal logic, so you don't need to.

And now comes the truly difficult bit, which is how to update Quay. Quay is a stateful, scaled-out Python application over a database; it's kind of an application stack, right?
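The config-map hash trick is documented in Helm's tips-and-tricks guide under "Automatically Roll Deployments." In a chart's deployment template it looks like this:

```yaml
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        # sha256 of the rendered config map: when the config changes, the
        # annotation changes, and Kubernetes rolls the deployment's pods
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
```

As Daniel notes, this gets you redeployment on config change, but it gives you no control over the order in which multiple deployments bounce.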
And you could have a Helm approach where you use Helm charts for all of the individual components, and then you leave the user to deal with all those individual components as different Helm releases. What the operator does is look at this whole thing as one stack. So in order to update Quay, what you actually need to do is scale your deployment down to zero, then run exactly one pod of your new Quay version, and what that will do is execute the database migrations. From one Quay release to another you might actually change fields in the database, create new tables, rename columns, and so on. These things are migrated automatically on startup, when the first Quay pod starts up, but it needs to happen without any other Quay instance, or really any other client, accessing the database, because they might catch some in-flux state of the database which doesn't make sense to them. So you need to scale down, let the database migrations run, which may take anywhere from ten seconds to a minute, and only then, when that's finished and successful, do you scale up the new Quay deployment. Otherwise, you need to hold back, right? And doing this with Helm, in a single command or within one Helm chart, is extremely difficult. You would need to write a script or some sort of program that scales down your deployment, running as a pre-upgrade hook in Helm, and that needs to finish and wait for the deployment to be scaled down so another job can run that does the database migration. It's not impossible, but at that point you are programming, right? And if you are programming, you might just as well program an operator that does this in a very consistent way. So the way this is done with the operator, at least in the case of the Quay operator, is very straightforward.
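The update sequence just described can be condensed into a runnable sketch. This is a simplified illustration, not the real operator's code; a dict stands in for the cluster, and `good_migration` is a hypothetical migration job:

```python
def update(cluster, new_version, migrate):
    """Ordered update: scale down, migrate once, scale up only on success."""
    steps = []
    cluster["replicas"] = 0                         # 1. nothing may touch the DB
    steps.append("scaled down")
    if migrate(cluster["db_schema"], new_version):  # 2. exactly one migration job
        steps.append("migrated")
        cluster["version"] = new_version            # 3. roll out the new version
        cluster["replicas"] = 3
        steps.append("scaled up")
    else:
        steps.append("migration failed, holding back")
    return steps

def good_migration(schema, version):
    schema.append(version)   # e.g. create tables, rename columns
    return True

cluster = {"version": "3.4.3", "replicas": 3, "db_schema": ["3.4.3"]}
print(update(cluster, "3.5.0", good_migration))  # ['scaled down', 'migrated', 'scaled up']
print(cluster["version"])                        # 3.5.0
```

The important property is the hold-back branch: on a failed migration the stack stays scaled down at the old version rather than starting pods against a half-migrated database.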
What you do, if you want to update your Quay... so I have another deployment here that's already running. If I go there, I'm just going to quickly show you that it is indeed running the version I say it runs. So I open the UI here; I'm not going to authenticate now, but you see it runs version 3.4.3, right? Going back here, what I want to do is update Quay, and we want to make all of this happen automatically. So the way I do this is I update the Quay operator and say: you're no longer version 3.4, you're now version 3.5. I switch the operator to another channel, which works like channels in your browser that get you new versions; every Quay version has a channel in the operator. In order to update, I change the channel to 3.5, and that makes OLM, the Operator Lifecycle Manager, and Kubernetes update that operator. This triggers a little glitch in the UI, where it takes a couple of seconds to come back, but what you will see is a 3.5 operator in a couple of seconds. And that 3.5 operator automatically starts to migrate all the Quay deployments. So if you go here, you can see the Quay deployment is still there, the pods are still running, but they are starting to update. This is your operator automatically updating your Quay, in that order I showed you: scale down, run the migration, scale back up. It just has the single pod, because nothing is going on in the system right now. So if I go back to Quay, and this may take another five seconds or so, eventually I would see a 3.5 down there. It's still not ready, still updating, and that's probably something we should fix in the operator, to not say the old version is gone until the new one is there. But that's kind of the in-between state that you have to accept during the database migration. Eventually it will say 3.5 down there, right?
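On the command line, switching the channel amounts to editing the operator's OLM Subscription. Something like the following, where the channel and catalog source names are illustrative rather than taken from the demo:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: quay-operator
  namespace: openshift-operators
spec:
  channel: stable-3.5          # was stable-3.4; OLM now rolls out the 3.5 operator
  name: quay-operator
  source: redhat-operators     # catalog source providing the operator
  sourceNamespace: openshift-marketplace
```

Once the Subscription points at the new channel, OLM replaces the operator, and the new operator then drives the application updates itself.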
And that's happening automatically, right? It happens automatically for all your customers and users, and that's how you get fewer support calls. That's how you make people run into fewer bugs, because everything is managed in the way your application needs to be managed. So I'm not going to wait for this here; let me just continue and come to an end.

So, when you think about what you want to use: you have these two choices. You also have other choices, but let's say you have the packaging route and you have the operator route. Packaging is very low effort, as we have seen; it literally takes one command with that Maven plugin to get you going, and you can get quite far with this already in terms of user experience. You can make the install a breeze, and you can also do simple updates. So that is very low effort, and it gives you some coverage of the lifecycle of your application, so you have a good consumption experience. If your app has more needs, however: if it is not just stateless, if it has this kind of lifecycle that I've shown you, with updates and reconfiguration, if it needs sequencing, ordering, orchestration, user input, and conditionals, then you're going to write an operator. And that initially seems like a much higher bar to meet. You need to actually write code, and you're signing up for maintaining APIs and compatibility contracts. So just to do what Helm does, it's initially much higher effort. But eventually, in the long run, if your application has those needs, and you want to provide that as-a-service experience that we've seen before, or you want to automate workflows in the cluster, then the operator gets you all the way there, right?
So that is some guidance on how to navigate between the two worlds. And it also depends, as we mentioned earlier, on what kind of application you want to package. If it's a very simple application, or if it's stateless microservices and things like that, Helm is a good fit. And I also believe it depends on how much you want to automate; sometimes it's not needed to automate everything. There is that trade-off, right? If you spend a whole day automating a 30-minute task, that's one thing. But if you spend a whole day automating a 30-minute task that happens once a week, that's going to pay off at some point. Obviously, yeah.

So meanwhile, Quay has updated, right? It took something like 45 seconds or so, a little bit long for a demo, but it eventually succeeded. And it does this for all the releases. Previously this was a pain for our customers, because they needed to do this dance themselves, and now, with the operator, we never get complaints about this anymore.

So that's something that you can get out of that experience. Now, people ask: when do I use what? I think you can safely say, when you're at the beginning of your application, and you don't quite know how it's going to turn out and how much it actually needs, Helm comes in just fine to get you going and to quickly share your application, the way Stefan showed. So, actually, do you see the slides? You should. So you start with a Helm chart, right? But then at some point you realize: oh, now this thing becomes more complicated, and I need to do more complicated things that I can't do with the out-of-the-box features. Then you can look at the operator pattern, right?
And we have a project called Operator SDK, as part of the Operator Framework, that helps you take your existing Helm chart and put it into an operator. That immediately gets you from just a Helm chart to an operator; it just doesn't have any more logic at this point. And something we aim to do, and allow you to do in the future, is to reuse your existing Helm chart in an operator to which you then add additional code to do any of those phase-three and phase-four things. This could be Go code, or it could be Ansible playbooks. It's something we call hybrid operators, and it's something we plan for the SDK to support this year. It allows you to mix and match existing Helm charts, so your investment isn't gone, with an operator that can truly do these more complicated things, which you would otherwise have to ask users to do by hand: "Oh yeah, don't do this with Helm, do these five kubectl commands and off you go." That's kind of what we are planning to do in that sense, and it would allow you to use the best of both worlds, right?
You know, use Helm for the simple deployment stuff, and use the operator for the more complex orchestration stuff, or the automation with cluster events or off-cluster services, and get the best of both worlds. The simplicity and the ease of access of Helm for developers is also key for adoption, and for your cloud-native automation journey as well.

Right, so there are a couple of resources to follow up with if you want to learn more about both of the technologies and patterns. And yeah, I think with that we are at the top of the hour.

Yes, we are, and there's a few more minutes we can run over if folks have questions here. But yeah, there's a ton, like the Kubernetes Operators book that was written by Jason Dobies and Joshua Wood; that is highly recommended content if you're dealing with operators, in my opinion. That book kind of cracked my brain open to the fact that I can do this, because even though I'm not a programmer, I know Ansible, and I can make whole full-blown operators out of Ansible now. There are even some operators built into OpenShift that are written in Ansible, and that's really heartwarming to me as somebody coming from the ops side, because I spent years using Ansible, and now, coming to Kubernetes, it's like: where does that skill set take me? With the operator pattern, it can take me a long way. It's pretty cool.

And for Helm (yeah, sorry, I don't have my iPad, so I can't hold the book up): Andrew Block and Austin Dewey, whom most of you know, wrote that one, and the Helm maintainers also have a book out. So between the two of them, and all the other resources for Helm, there's a lot available out there if you're struggling. Feel free to drop in on our Discord at any point in time and ask your questions there; we can probably find some answers for you. And always check out the Kubernetes Slack.
That's a great place to go look for answers, or find like-minded people that are doing similar things to you. Any other resources we want to mention? You have these two pages back here, so there's a ton of documentation. There's also a lab on learn.openshift.com for Helm, and also for operators. This is really cool, because you get access to an OpenShift cluster right in your browser, nothing else needed to install, so I really like those interactive labs. And then, on the Kubernetes Slack, we have the Helm channel and the Kubernetes operators channel. And the Operator Framework, which we've touched upon in this stream as well at some point, has its own website already. Yeah, I've been dropping links in chat left and right. Can you share these slides with me, so I can put them up on our slide share at some point today? Thank you.

And folks, if you're looking for the slides, I'll have them up in the next 24 hours for sure. And yeah, thank you, Daniel, thank you, Stefan, thank you, Karina, for coming on and showing off operators and Helm and how they can work together to extend clusters and make people's lives easier and better. You know, I don't like support calls in the middle of the night; I'd rather have an operator take care of it for me, and I can programmatically do that. That's powerful. So thank you all, and thank you for tuning in. Coming up here at noon, we're going to be talking about event-driven applications with (I can't even say the name) Kogito serverless workflows and Knative. So if you're interested in serverless, check it out, and until then, stay safe out there, folks. Thank you, everybody. Thanks, Chris. Thanks. Thank you, guys.