Hi there, thanks for watching this video. My name is Mikael Morello. I am a software engineer at Elastic, working on the Elastic Cloud on Kubernetes operator. I am really excited and happy to have this opportunity to give you a short presentation and a short demo of the Kubernetes operator and of our observability solution, but most importantly, to show you the latest really nice feature of the operator, which is the ability to automatically scale Elasticsearch clusters. It is a very exciting feature, released as an alpha feature in version 1.5 of the operator. But before that, I would like to take just a few seconds to briefly present what Elasticsearch is, along with the different solutions. At the heart of the Elastic Stack there is Elasticsearch, which is a distributed data store and a search and analytics engine. As a developer, you can interact with Elasticsearch through its API, but you might also want to deploy Kibana on top of it. Kibana is a data visualization dashboard for Elasticsearch; I will show you some of its features in a moment. These two fundamental components enable the three solutions: Enterprise Search, which allows you to implement a powerful search experience on your data and in your applications; Elastic Observability, which is what we will focus on today; and finally Elastic Security, which is a security information and event management system. All three solutions are powered by Elasticsearch and Kibana. To feed them with data, you can deploy data shippers. Today, I will show you how to deploy Beats on Kubernetes, more specifically Filebeat to ship your application logs and Metricbeat to ship metrics. Running all these products is not that straightforward, mostly because we are talking about managing stateful workloads, but we are here to help.
You can either use Elastic Cloud, which is our hosted service, or deploy them on-premise, using either ECE, which is basically the software that runs our cloud service, or the operator, which is what I will show you right now in the demo. So the first thing I want to do is deploy the operator on my Kubernetes cluster. There are a couple of ways to do that. The easiest one is to use the all-in-one manifest; I will show you how to do that in a moment. If you are a more advanced user and need more flexibility, you can also have a look at our new Helm chart. The operator is also available on OperatorHub, and if you are a Red Hat OpenShift user, well, the Elastic Cloud on Kubernetes operator is a Red Hat OpenShift certified operator and is already available in the Red Hat OpenShift console. But for now, as I said, I just want to use the all-in-one manifest, and deploying the operator with it is just a matter of applying this YAML file on my cluster. That worked. Let's see if the operator is running. Yeah, looks great. Let's also have a look at the custom resource definitions which have been installed. As you can see here, we have roughly one custom resource definition for each application of the Elastic Stack. But what I would like to do right now is focus on Elasticsearch, so let's have a look at the Elasticsearch manifest example. This is what an Elasticsearch resource looks like. It lets you define the topology of the Elasticsearch cluster you want to deploy on your Kubernetes cluster. In this example, it is a fairly simple topology: I have a first node set with just one node, which is a dedicated master node, and a second node set with two nodes, which are almost dedicated data nodes. For these data nodes, I want to use 10-gigabyte persistent volumes using the default storage class. So let's apply this manifest.
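For reference, the Elasticsearch resource being applied here would look roughly like the sketch below. The resource name, Elasticsearch version, and exact role configuration are assumptions, I only took the topology (one dedicated master, two data nodes, 10 GB volumes) from the demo; check the ECK documentation for the exact schema of your operator version.

```yaml
# Apply with: kubectl apply -f elasticsearch.yaml
# (the operator itself was installed earlier with the all-in-one manifest)
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: demo-cluster          # placeholder name
spec:
  version: 7.12.0             # assumption: any 7.x version supported by ECK 1.5
  nodeSets:
  - name: master
    count: 1
    config:
      node.roles: ["master"]          # one dedicated master node
  - name: data
    count: 2
    config:
      node.roles: ["data", "ingest"]  # the "almost dedicated" data nodes
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi     # 10 GB persistent volume, default storage class
```

Leaving out `storageClassName` in the volume claim template is what makes the cluster use the default storage class, as mentioned in the demo.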
What I would also like to do right now is deploy Filebeat and Metricbeat, just to save some time, but I will get back to these examples in a moment. Okay, sounds good. So let's maybe just have a look. Yeah, okay. What is happening right now behind the scenes is that the operator is creating all the Kubernetes resources required to run an Elasticsearch cluster on Kubernetes. For example, we need some secrets, we need some network services, and obviously we need some pods, so let's have a look at the pods for this cluster. They are almost running. Here you can see my dedicated master node and my two data nodes. In the meantime, let's have a look at the Metricbeat resource. So this is Metricbeat. As you can see here, all I have to do to define which Elasticsearch cluster I want to send my data to is add this elasticsearchRef field. What the operator does behind the scenes is create all the required configuration to establish a secured connection between all these Metricbeat instances and my Elasticsearch cluster. In the context of Beats, I can also do the same for Kibana: it allows a Beat to set up and install some sample dashboards. As you can see, there is a lot of YAML here. This is because the Beat resource gives you a lot of flexibility about what kind of metrics you want to collect and how you want to collect them. It also allows you to define how you want to deploy the pods. In my case, I want to use a DaemonSet to have one instance of each of my Beats on each of my Kubernetes nodes. Okay, so let's have another look at my resources, for example Kibana and the Beats. Okay, looks good, everything seems to be deployed and green. So let's try to log in to Kibana. To log in to Kibana, I will use the default user named elastic. This is a user generated automatically by the operator; it allows you to log in to Kibana for the first time. Oops, maybe I made a typo. Okay, looks good.
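To make the elasticsearchRef, kibanaRef, and DaemonSet parts concrete, here is a stripped-down sketch of what a Metricbeat resource along those lines could look like. The names (`demo-cluster`, `kibana`), the version, and the single `system` module are assumptions for illustration, the real manifest in the demo carries much more configuration:

```yaml
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: metricbeat
spec:
  type: metricbeat
  version: 7.12.0             # assumption: should match the Elasticsearch version
  elasticsearchRef:
    name: demo-cluster        # the operator wires up a secured connection to this cluster
  kibanaRef:
    name: kibana              # lets Metricbeat set up its sample dashboards in Kibana
  config:
    metricbeat.modules:
    - module: system
      period: 10s
      metricsets: ["cpu", "memory", "network"]
  daemonSet: {}               # one Metricbeat pod on each Kubernetes node
```

The `elasticsearchRef` and `kibanaRef` fields are all that is needed for the connection; the operator generates the credentials and TLS configuration behind the scenes, as described above.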
So maybe let's try to see if I already have some logs. Okay, I've just deployed Filebeat, so this is kind of expected. Let's see if we have some metrics. Not yet. Let's try to refresh to see if I have some logs. Oh, okay, that's better, I have some logs. So let's see, for example, if I can filter on my Elasticsearch pods' logs. Yeah, that's good. As you can see here, I'm now able to search into the logs produced by all the pods on my Kubernetes cluster. So let's check again if I have some metrics. Yes, as you can see, it is a six-node Kubernetes cluster, and what we are seeing here is the CPU usage for each of my Kubernetes nodes. So maybe let's have a look at the pod metrics. Let's keep CPU usage; I just want to group by namespace because I have a lot of pods. The KubeCon demo namespace, let's say, for example, this one. Okay, let's maybe change the time window. Yes, okay, as you can see here, it is very easy for me to look at the CPU usage, the memory, and the network traffic for each of my pods. What I can also check is that, as I said, Metricbeat automatically sets up and installs some sample dashboards. For example, this one should give me a more specific overview of my Kubernetes cluster. Okay, great. So now, as you can see, I have a lot of data being ingested into my Elasticsearch cluster, and what I would like to show you is how to scale my Elasticsearch cluster automatically to adjust the storage capacity. So let's get back to my Elasticsearch resource; I just want to show you how to enable autoscaling on an Elasticsearch cluster. To enable autoscaling, all you have to do is declare what we call an autoscaling policy. An autoscaling policy lets you describe the behavior of the autoscaling controller on a set of nodes. In this example, I just want to control the nodes which have this set of roles, and what I want to define here is that I want to have at a minimum two pods.
And I just want the operator to be able to create three additional nodes, so up to five nodes, to adjust the storage capacity of my cluster. If you have a storage class which supports volume expansion, you can also define a different min and max for the storage capacity. In that case, the operator will first automatically scale the storage capacity vertically, and then add some nodes as required. Autoscaling is an enterprise feature, so what I have to do first is set up a trial license. I also want to deploy some pods that generate a lot of logs and a lot of data on my cluster, in order to speed up the demo a bit. Okay, looks good. So now we can go back to Kibana. For the purpose of this demo, I've set up a sample dashboard to let you visualize the remaining disk space in the volumes of my two initial nodes. Each node has a persistent volume of 10 gigabytes. Maybe let's also adjust the time window to have a better overview of what will happen. Since I wanted to show you a realistic demo, it will take a few hours to fill up the volumes, so the rest of this demo has been recorded and I will speed up the video a bit. So let's start. Okay, there is almost one gigabyte of storage space remaining, and as you can see, the operator has automatically added a new node to my Elasticsearch cluster, and the data is automatically rebalanced across the nodes. If we wait a bit more, the next nodes are automatically added, and then again, the last one. Let's have a look at the pods. As you can see here, you can see the two initial pods, then the third one, which was added a few hours later, and then the two other ones. You can check the autoscaling status by having a look at this annotation stored in the Elasticsearch resource. And as you can see, the amount of data managed by my cluster at the end of the demo would still require another node.
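The autoscaling policy described above could be sketched as follows. In the 1.5 alpha, the policy was declared through an annotation on the Elasticsearch resource; the exact field names and the JSON shape below are from memory of that alpha API and may differ in your operator version, so treat this as an illustration and check the ECK autoscaling documentation before using it:

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: demo-cluster          # placeholder name
  annotations:
    # alpha API: the autoscaling policies are declared as JSON in an annotation
    elasticsearch.alpha.elastic.co/autoscaling-spec: |
      {
        "policies": [{
          "name": "data",
          "roles": ["data"],            # the set of roles this policy controls
          "resources": {
            "nodeCount": { "min": 2, "max": 5 },   # 2 initial nodes, up to 3 more
            "storage":   { "min": "10Gi", "max": "10Gi" }
          }
        }]
      }
spec:
  version: 7.12.0             # assumption: any 7.x version supported by ECK 1.5
  nodeSets:
  - name: data
    count: 2
    config:
      node.roles: ["data"]
```

With equal min and max storage, as here, the operator can only scale horizontally by adding nodes; with a storage class that supports volume expansion, a larger storage max would let it grow the volumes first, as mentioned in the demo.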
But the operator will not go beyond the five-node limit I set in the autoscaling policy. The same information is also reported through some Kubernetes events. Okay, so I hope that you have enjoyed this presentation and the demo, and if you haven't already, just come visit us at the Elastic booth. It is a great opportunity to chat, to meet our experts, to ask your questions, and more. So see you there, bye.