Hi, everyone. Welcome to our session today at KCD Ukraine. We're excited to be here, and we're going to talk about how to avoid the dev plateau with CRDs. First, let us introduce ourselves briefly. Here is Guy, our awesome and brilliant solution engineer at Komodor. Hey, Rona, I'm very excited. And with me is Rona, one of our most amazing developers. We are from Komodor, the Kubernetes operations platform where everyone in the organization, dev and ops alike, can get full visibility, troubleshooting, and advanced capabilities to be successful in their day-to-day operations with Kubernetes. So without further ado, let's get started.

Imagine Kubernetes as a vast ocean, ever expanding and infinitely deep. But something is missing here, right? There is almost no coral, and in the end we want a healthy ecosystem in it. So what do we do? We sink a man-made object into the ocean as a base for ocean life like coral and fish, and once we create a base for other life to succeed on, we get a thriving ecosystem. Why am I talking about this? Kubernetes is the ocean, the man-made object is the platform, or the CRD, and the thriving ecosystem is what gets created using CRDs.

So if we come back from the ocean onto land, let's talk about what Kubernetes adoption looked like when Kubernetes had just started. The early adopters were not just small startups, but also large companies with great talent and budget, like Google. In between those small startups and Google reside most of the companies, and most of the companies had to face big challenges in order to adopt Kubernetes in its early stages. So Kubernetes took the approach of adding more core functionality for wider adoption. It added things like Deployments, StatefulSets, and other application capabilities that allowed organizations to use and adopt Kubernetes. Kubernetes therefore became the dominant platform, but with a lack of enterprise concerns. To support the demanding challenges of adoption, Kubernetes has been adding more and more enterprise capabilities. If you look at recent versions, you'll see that security and storage are now top concerns and have been changing a lot. Kubernetes has leaned more toward the enterprise side rather than the core side, which is fine, because like any other technology, it reached the phase of asking: what will take us to the next step?

Exactly. And when we look at the capabilities that were added to Kubernetes, and are still being added, we can see that the core capabilities were improved mostly in the early stages, on the left side, and now more and more enterprise features are coming in. That's not bad. It just means that the core capabilities that make our applications orchestrate better have slowed down a bit, in order to enable wider adoption by organizations at different stages that need more security and storage along the way. It's a phase of bringing adoption to a different kind of organization. If we compare two major releases of Kubernetes, 1.9 and the recently released 1.25, we can see that on the left side most of the advancements were in applications and core workloads. Can you imagine your cluster without a Deployment, StatefulSet, DaemonSet, or ReplicaSet? There is no cluster that can run and operate without those capabilities.
Also, we can see that Kubernetes opened up to brand-new workloads in that era, like running on GPUs and Windows machines, and it opened the door to CRDs. On the other side, Kubernetes 1.25 is very focused on security and storage, and those areas are getting more stable. So the focus moved from the core side to the next stage of adoption.

And adopting Kubernetes is not only about running applications. Kubernetes orchestrates the containers, the networking, and everything around that, but in order to adopt it you need to face more than that. You need to face questions like: how do I monitor my cluster? How do I autoscale, when there are different autoscaling methods I want to put in place? You need to think about observability, and you need to think about cost reduction and optimization. These are all things Kubernetes will not help you with, or that it solves only at a very basic level, yet you need them if you want to run Kubernetes effectively. And these challenges are not nice-to-haves, things you may want in the future or small additions to what you have. Without monitoring, without observability, you will not get the value you are looking for. So for most organizations, these are must-solve challenges.

What we need is a slightly different approach, because if Kubernetes will not help us, maybe something else can. For example, let's say I'm running a gaming application; we have a customer doing exactly that, and I'll share their example with you. Our gaming application needs to scale based on the number of active users. One pod can support only 10 users, because the backend system does most of the heavy lifting. So obviously, when we need to scale for more users, we cannot rely on CPU and memory; we want to scale on the number of active or concurrent users on our platform. For that, we need a new scaling methodology, one that does not come out of the box with Kubernetes. But we do want it to be orchestrated automatically: we want a controller that will do the scaling, react when something goes wrong, and orchestrate all of that for us. We want it to be Kubernetes-native configuration, so we can apply it together with our application's code, image, and any other configuration the app requires. But we also want it to be custom and flexible. Kubernetes gives us strict APIs, like the Deployment API, where every field matters; here we want a custom interface that we can extend at any time and use at any time. For example, if we expect more users at some stage, we could pre-bake pods for that. So Rona, do you have some kind of idea for a solution we can put in place?

Sure. My solution, thanks to Kubernetes, is custom resources: anyone can create their own custom resource and a controller to extend the Kubernetes API with it. This gives you tons of flexibility and customization, so teams can build their own platform based on their needs. Users can use third-party tools that ship CRDs, such as Argo Rollouts, or a custom controller like Guy's gaming autoscaler. If I were to install Guy's gaming autoscaler on my cluster, I could just run kubectl get gamingautoscaler, and the same Kubernetes API would give me all the information it has about the gaming autoscalers in my cluster.
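To make that concrete, here is a minimal sketch of what such an extension could look like. Everything in it is hypothetical: the stable.example.com group, the GamingAutoscaler kind, and its fields are invented for illustration; only the CustomResourceDefinition API itself (apiextensions.k8s.io/v1) is standard Kubernetes.

```yaml
# Hypothetical CRD for the gaming autoscaler described above.
# The group, kind, and schema fields are illustrative, not a real project.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: gamingautoscalers.stable.example.com
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    kind: GamingAutoscaler
    plural: gamingautoscalers
    singular: gamingautoscaler
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                targetDeployment:
                  type: string
                usersPerPod:        # e.g. 10 users per pod, as in the example
                  type: integer
                maxReplicas:
                  type: integer
---
# An instance of the custom resource, applied alongside the game's Deployment.
apiVersion: stable.example.com/v1alpha1
kind: GamingAutoscaler
metadata:
  name: game-backend-scaler
spec:
  targetDeployment: game-backend
  usersPerPod: 10
  maxReplicas: 100
```

Once the CRD is registered and a controller is watching it, kubectl get gamingautoscaler behaves exactly like kubectl get pod, which is the whole point.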
It's the same as if I were to run kubectl get pod, and that's exactly the point. If we want to talk a bit about how it works: a user defines the schema and desired shape of a custom resource using a CustomResourceDefinition, and when a user creates an instance of that custom resource, Kubernetes stores it in etcd, and from then on the custom resource can be accessed and managed through API endpoints just like any other Kubernetes resource. And if you think about it, CRDs together with operators give you almost unlimited possibilities; you can do basically whatever you want. This is really nice in theory, right? But let's get more specific about how we can use existing tools and their CRDs to face some of the challenges we talked about just now. These other tools sound very interesting. Yeah, they are.

So Guy and I actually wrote a simple, small application: two deployments, each with two pods. Now we want to scale our consumer based on the number of messages in a Kafka topic. This cannot be done with the built-in HPA in Kubernetes, so we need another solution, and KEDA will do that autoscaling for us.

Which is great, but it's not the last challenge we need to face; we have many more, as we saw in the list before. At some point we were required to add more security to our clusters, and one of the things we had to do was deploy certificates. Certificates are a big concern because you need to manage them, create them, and renew them; there is a lot of administration involved in using certificates efficiently. So we utilized one of these tools, cert-manager, which manages certificates as secrets within Kubernetes. We deploy cert-manager and then we create a certificate as a Kubernetes resource. So what's next?

That's very cool. So Guy and I have monitors that we configured manually, and we want to package our application together with the monitoring configuration, all in one, and store it in Git, so we get history, version management, and more of a GitOps implementation, right? So we're going to use the Alertmanager CRDs, and we can configure three, five, or ten of them. That's awesome, but it's not the last stage. We also want to configure CD properly. We don't want our deployment to be a script that someone wrote a year and a half ago; we want a tool where we define our application and it handles the deployment automatically. A very common tool for that is Argo CD. We can deploy Argo CD, define our applications as CRs in it, and get all of that in a Kubernetes-native way, which is great. What's next?

Okay, so now we want to enforce configuration policies, and we definitely don't want to do it by ourselves; we don't want to write a crazy program. So we're going to use Kyverno, which is a policy engine for Kubernetes. Amazing. And we know how much effort it takes to integrate those checks into CI/CD pipelines and actually keep them streamlined and maintained over time. Next, we want to add more visibility into our clusters, and OpenTelemetry is a tracing and metrics framework that is very popular these days. So we use its operator and a CR configuring the collector to deploy OpenTelemetry and configure everything we need in place. And now we find ourselves with yet another tool and more CRDs in place. Nice.
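Both the scaling step and the certificate step boil down to applying a CR. Here is a minimal sketch, assuming a consumer Deployment named kafka-consumer, a topic named orders, and a ClusterIssuer named letsencrypt-prod; all of those names and the thresholds are illustrative, not from the talk.

```yaml
# KEDA ScaledObject: scale the consumer on Kafka consumer-group lag
# instead of CPU/memory.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: kafka-consumer-scaler
spec:
  scaleTargetRef:
    name: kafka-consumer             # assumed consumer Deployment
  minReplicaCount: 1
  maxReplicaCount: 20
  triggers:
    - type: kafka
      metadata:
        bootstrapServers: kafka.default.svc:9092
        consumerGroup: orders-consumer
        topic: orders
        lagThreshold: "50"           # target lag per replica before scaling out
---
# cert-manager Certificate: request and auto-renew a TLS certificate
# that ends up in a plain Kubernetes Secret.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: game-tls
spec:
  secretName: game-tls               # Secret where the signed cert is stored
  dnsNames:
    - game.example.com
  issuerRef:
    name: letsencrypt-prod           # assumed ClusterIssuer
    kind: ClusterIssuer
```

In both cases the controller shipped with the tool (KEDA, cert-manager) watches these CRs and does the ongoing work, which is exactly the pattern the talk describes.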
So far we installed Grafana using Helm charts, but now we want to move to an operator that will help us with the day-two operations of Grafana. Luckily, CRDs don't have to manage only Kubernetes-native things; they can also manage Grafana or any other third-party tool. In this case, we're going to use an operator to manage our own Grafana, and now we won't need to do all of its day-two operations manually.

Right. And after some time, we find out that when you work with ten pods, the basic Kubernetes rollout is pretty fine. But when you want, say, a canary deployment or some more advanced rollout strategy, there is no way to do that out of the box; you would need to build your own. So maybe we can deploy Argo Rollouts and get a much more advanced rollout configuration. This is the example we are going to demo in a few minutes, to show everyone how we can do the migration and utilize capabilities that do not come out of the box with Kubernetes. Argo Rollouts is great, and when you see the demo, you'll like it.

Now our application is becoming bigger, and we want to move some of our infrastructure components in as well, so we can package, together with our own application, the managed database it needs to run, or manage Kubernetes clusters themselves with better multi-tenancy, and create all of this in a Kubernetes-native way. So we're going to use... an amazing project that is very popular at the moment. And when we want to go to an even more complex architecture, with complex networking, maybe Istio or another service mesh in place, or a multi-cluster, multi-region setup, we definitely find ourselves using more and more CRDs and tools in order to utilize and extend Kubernetes into something even bigger than the project already is. In a simple cluster, in the middle we have Kubernetes itself: deployments, pods. But we keep adding more and more components, and we extend Kubernetes into something very big. What we think is that, as fewer core capabilities get added than before, CRDs will be the next phase, and you will find clusters with more and more of them. Yeah, it's amazing how we went from those core features to a much more robust, much bigger cluster using only CRDs. Great. So maybe we'll show a little example. Yeah, that would be great.

So we are jumping in. We want to show an example of a blue-green deployment. What happens with blue-green is that you keep a preview version alongside the active one, so you can take a look at the preview before switching over, and on the other side you can revert very fast because you still have the previous version up and running. That's very important. The second thing about this kind of rollout is that otherwise you would have to do it manually or develop it on your own, and that would take a lot of time: bringing up the new environment, putting it in preview, switching the service, and thinking about all the other things you need to take care of during the rollout. What is interesting is that with Argo Rollouts, instead of writing everything on our own, we can use a third-party tool and get all the value of the blue-green deployments it offers.
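For reference, a blue-green Rollout in Argo Rollouts looks roughly like the sketch below. The Service names, image tag, and replica count are assumptions based on the demo that follows; the strategy.blueGreen fields are the actual Argo Rollouts API.

```yaml
# Sketch of a blue-green Rollout. Argo Rollouts keeps the old ReplicaSet
# around after the switch, which is what makes fast reverts possible.
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: nginx-rollout
spec:
  replicas: 10
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.24
          readinessProbe:            # small readiness delay, as in the demo
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 10
  strategy:
    blueGreen:
      activeService: nginx-active    # Service routing live traffic
      previewService: nginx-preview  # Service exposing the new version first
      autoPromotionEnabled: true     # switch over automatically once ready
```

Compared to a plain Deployment, only apiVersion, kind, and the strategy block change, which is exactly the migration shown next.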
So what we're going to show you is a plain Deployment, and then we change it into an Argo Rollout and utilize blue-green deployment. Just let me share my terminal. Okay. Now we are going to take a very basic Deployment. In this Deployment you can see we have just an NGINX with 10 replicas and the default rolling-update behavior. We added a small delay to the readiness probe just to show you how the rollout progresses and how the behavior changes later. Now we apply it: the Deployment is created, the pods are created, and next we want to look at how this rollout behaves. While we wait for these pods to become ready, let's explain that the Kubernetes rolling update works in steps: by default, 25% of the new pods come up and 25% of the old pods go down, and then the rollout moves to the next step. That makes it hard to revert, because you never have the full capacity of pods of the previous version.

Now we can see that all 10 of our pods are up, running, and ready, and we want to deploy our new version. We are just changing the image, nothing special, and then we watch the pods. You can see at the top that the new pods are joining; they are not ready yet and it will take a while, but some of the old pods have already been deleted. So it's not a blue-green deployment: we have already lost some of our old pods. The basic Kubernetes capabilities really don't give us what we need from this rolling update, and that's the difference we want to show with Argo Rollouts. This may be fine for ten pods, but if we increase the number of pods to a hundred or more, it will not work; we need another strategy. Exactly.

So next we take the Deployment and switch it to an Argo Rollout. The difference between them, put simply... sorry, let me show you the previous version... is that the Argo Rollout configuration just requires us to change the apiVersion and the kind. Everything else is pretty similar to a Deployment, except for the strategy. So instead of writing our own code, we just configure the strategy and get the blue-green capabilities. Before the demo we already installed Argo Rollouts, following the instructions on its website. Next we apply the first manifest and then we watch the rollout. Besides all the nice capabilities, we also get a command line that shows us which version is active, which is stable at the moment, and which is the preview. Obviously, because this is the first version, they are all the same, and what we do next is release our new version. Let's wait for all the pods to be ready... Now everything is stable and active, which is great. So what's next? Let's apply our second step. Our second step is just the change of the image, the same change as before. But now we can see that we have two versions here: the one at the bottom, the previous version, is marked stable and active and is serving all the traffic, and the preview at the top is going to be switched over automatically by Argo Rollouts. You will see that in a few seconds the switch will happen and traffic will move to the new version.
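In terminal terms, the demo steps map to a handful of commands from the Argo Rollouts kubectl plugin. The manifest file name and the image tag below are assumptions; the plugin subcommands themselves (get rollout --watch, set image, promote) are the plugin's standard ones.

```sh
# Apply the blue-green Rollout and watch its active / preview / stable state,
# as shown in the demo terminal.
kubectl apply -f rollout-bluegreen.yaml
kubectl argo rollouts get rollout nginx-rollout --watch

# Release the new version by changing only the image.
kubectl argo rollouts set image nginx-rollout nginx=nginx:1.25

# With autoPromotionEnabled: false, the switch would instead be a manual step:
kubectl argo rollouts promote nginx-rollout
```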
The old pods will be kept for some time so that we can revert rapidly if we want to. So the rollout has switched: this was the preview version, the previous version was stable and active, and the switch is going to be very fast. Oh, there you go: stable, active, new version. And that gave us a blue-green deployment with Argo Rollouts without doing basically anything. Yeah, without changing almost anything. And that's an example of how you can use CRDs and third-party tools easily, without developing your own code. Awesome. Thank you. Awesome demo, and thanks everyone for listening. It was a pleasure being here. Thank you, Rona. Thank you, everyone. Thank you. I'm also a fan of Argo, so it was a great demo to see. Okay guys, we have a 50-minute break, so we'll see you at the next session. Thank you. Thank you very much. Bye, everyone.