Welcome to another OpenShift Commons briefing, as we like to do on Mondays. We bring in other projects and products and have people give us an intro, tutorial, or overview of what they're doing. We've all probably heard of the ELK Stack, or Elastic, and all of those things, but I thought it was high time we got an update on what's going on in Elastic Cloud, especially since they've now built an operator. So we have Peter Brachwitz, who is here with us from Vienna (that's why he's a little in the dark over there), and he's going to give us an intro and tutorial to Elastic Cloud on Kubernetes, and we'll have some demo. Then there's time at the end for live Q&A. So wherever you are, on Facebook, Twitch, YouTube, or here in BlueJeans, please ask your questions in the chat, and we will relay them to Peter and have a conversation at the end. So welcome everybody, and Peter, go for it. Take it away, introduce yourself, and let's hear all about Elastic Cloud. Thanks, Diane, for the introduction. My name is Peter; as you've heard, I'm a software engineer working for Elastic on what we call Elastic Cloud on Kubernetes, and that's also what I'm talking about today. Let me see if I can bring up my slides. So Elastic Cloud on Kubernetes is a new way, the official way, to run the Elastic Stack on Kubernetes. And maybe it's worth taking a step back and reminding ourselves what we mean by the Elastic Stack. As you might know, at the heart of the Elastic Stack is Elasticsearch, a distributed data store and search and analytics engine. We have a window into that stack through Kibana, which is an extensible visualization and UI application. And that enables the three solutions we currently have. The first is Enterprise Search, which gives teams the ability to unify data from multiple data sources into a single search experience, or lets them add a search box to their application that's powered by Elasticsearch. 
Then there's Elastic Observability, which is what we're looking at more closely today, also during the demo later: it gives you the ability to observe your infrastructure through logs, metrics, application performance monitoring data, uptime data, and things like that, and to alert on it. And finally Elastic Security, which is a SIEM solution and also has an endpoint component. All of these also have small binaries or processes behind them, which is something to keep in mind, because what we're interested in later is orchestrating all of this on Kubernetes. To get data into Elasticsearch, we have the data shippers, Beats and Logstash. Beats comes in many flavors: Metricbeat for metric data, Filebeat for files, and so forth. But now, coming back to the question: how do you run all this? Of course, the easiest way is to use our hosted SaaS offering, Elastic Cloud. It's the easiest way because all you have to do is say "I want to run this product and that product," tell us how many resources you want to use, and we take care of the rest. Now, it could be that for one reason or another, regulatory, legal, or some other requirement, you're not able to run on a public cloud and have the requirement to run on your own infrastructure. This is where Elastic Cloud Enterprise comes in, as an on-premise version of Elastic Cloud. And finally, and this is what we're focusing on today, Elastic Cloud on Kubernetes, which aims to give you a similar experience to Elastic Cloud Enterprise, but on Kubernetes. Today, Elastic Cloud on Kubernetes is a Kubernetes operator. It's available on OperatorHub.io, and with the latest release it's also going to be a Red Hat-certified OpenShift operator. Now, why do we need an operator to run Elasticsearch, or the Elastic Stack, on Kubernetes? Well, you could say: can't I just create the necessary Kubernetes objects myself to run these applications? 
And the answer is: yes, maybe for simple use cases. But as soon as we think beyond a simple proof-of-concept deployment, it becomes clear that we need more power and more capabilities to run this. Let me illustrate by going back to the slide we looked at earlier and thinking about what kind of applications these are and what requirements they have when we run them under an orchestrator. Which of these applications are stateful, and which of them are stateless? We quickly realize that Elasticsearch, at the center of it all, is the stateful application that all the others rely on to persist their state. It's also the most interesting application from an orchestrator's point of view, and it's what I'm mostly focusing on today. Kubernetes, of course, has support for stateful workloads in the form of StatefulSets. But there are other things to take into consideration. We need to think about storage to persist the state, so we need to think about volume management, and about choosing a storage provisioner that offers good enough performance for an application like Elasticsearch. And then the other focus of this presentation: we need to think about what happens after you've initially deployed this cluster. What if your requirements change? You need to scale up or down, or you want to change the architecture of your cluster because you have a new use case you want to incorporate. And then it becomes clear that not all Elasticsearch clusters are uniform. A simple cluster like the one I illustrate here might be something you start out with. 
I don't know how much you know about Elasticsearch, but in Elasticsearch we have this concept of node roles that basically determine what each node does in the cluster: the master role, responsible for cluster state, reaching consensus, and cluster membership; the data role, to store data on the node; ingest, to run ingest pipelines; and ML, to run machine learning jobs. In a simple topology like this one, everything is co-located on the same physical node; the nodes play all the roles together. But as your use of a cluster grows, people start factoring these out into different tiers. Then you suddenly have a cluster where not all nodes are uniform anymore; they have different roles. One thing people tend to do is pull the master nodes out into a dedicated set of nodes, just to isolate them from any spikes in traffic and make sure your cluster state stays available at all times. Then there are more advanced use cases, of course, where you pull every kind of node out into a separate tier. And if you have a lot of time-series data, it might even make sense to differentiate the data tier itself into what's called a hot-warm-cold architecture. Data starts out on a so-called hot node, very powerful hardware, containing data you search a lot (for example, the log data of the last couple of days), and over time you move it to cheaper hardware that's maybe a little slower. As the data becomes less relevant to you, that's a good compromise to save money. The main purpose of this slide is not to explain the different kinds of Elasticsearch topologies or cluster architectures to you; the one thing to take away is that not all Elasticsearch nodes are uniform. 
Coming back to what I said about the need to manage Kubernetes resources: if we think about how StatefulSets work in Kubernetes, we realize that we cannot create such an architecture with a single StatefulSet, because a StatefulSet has one pod template and can create only one uniform type of node. So if you want to implement an architecture like that, you need multiple StatefulSets. And this is when you realize that managing this manually is a lot of work, and this is where an operator has unique capabilities. Just a quick reminder, I'm sure everyone knows how operators work: the main feature we are using here is the extensibility of the Kubernetes API server through custom resource definitions, which allow us to introduce our own types into the Kubernetes API. On the right-hand side, you should see an example of how an Elasticsearch cluster spec looks based on our CRD. You see what I said earlier about different tiers of nodes, we call them node sets here: I have defined two sets of nodes, three master nodes and two data nodes. The second pillar of an operator is an actual process running in your Kubernetes cluster, of course, which has access to the Kubernetes API and watches as users make changes to these custom resources. As they create and spec out Elasticsearch clusters, for example, it responds to these events and starts a process that we call reconciliation: it compares what the user has specified ("I want a cluster with these and those nodes") against what it sees in the Kubernetes cluster at that moment, and works towards the desired state the user has expressed. That's maybe a little abstract, so let's take the example from before. 
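To make the CRD a bit more concrete: a spec like the one Peter describes, two node sets with three dedicated masters and two data nodes, looks roughly like this. This is a sketch based on the ECK Elasticsearch custom resource; the version number and the per-role settings shown are illustrative values, not taken from the demo.

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
spec:
  version: 7.10.0
  nodeSets:
  - name: master          # dedicated master tier
    count: 3
    config:
      node.master: true
      node.data: false
      node.ingest: false
  - name: data            # data tier
    count: 2
    config:
      node.master: false
      node.data: true
```

Each node set translates into its own StatefulSet under the hood, which is exactly the multi-tier layout a single StatefulSet cannot express.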
So this Elasticsearch cluster with two different sets of nodes, master nodes and data nodes, translates under the hood into two StatefulSets that we manage: one with three master nodes, and one with two data nodes. But actually, that's not all, because there are a lot of supporting actors here. There are ConfigMaps that get mounted into the pods for configuration files, and Secrets for TLS data, for user and password information, and for the so-called keystore, which is the way in Elasticsearch to specify sensitive configuration information that shouldn't be in a plain-text file. So there are a lot more Kubernetes resources to manage and create. And then we need to front all of this with Kubernetes services, because Elasticsearch, as a distributed data store, is fronted by a RESTful API that we can expose through a Kubernetes service. The operator does all of that during reconciliation. It looks at the cluster: do we have the StatefulSets we expect to have? If not, create them. Do we have all the supporting Kubernetes objects? If not, create them. And we go beyond that as well, by interacting directly with the Elasticsearch cluster that we create. On the one hand, of course, that's in order to apply configuration settings. But on the other hand, and here we come back to this idea of day-two operations, where your cluster topology changes over time, we take special care when we enact these configuration changes. Consider, for example, a scale-down: maybe you have over-provisioned your cluster, or you're migrating away from one topology to another, so you're scaling down one type of node and scaling up another. We take extra care to migrate data away from a node before we decommission it. That's not to say that Elasticsearch doesn't have its own recovery mechanisms to deal with failures; it's rather that this is a planned configuration change. 
So we want a smooth transition, and we don't want to rely on the failover mechanisms in Elasticsearch. Similarly for rolling upgrades. A rolling upgrade is when we apply new configuration to all the nodes in the cluster, but we do it one node at a time. Again, we interact with Elasticsearch here and basically tell it, if you will: "Hold on a second, we're going to take down this node. No need to panic, no need to recover the data; it's coming back in 30 seconds with new configuration." So what we're doing here is making sure that the rollout of configuration changes, and the transition from one topology to another, is smooth, without interruption and without downtime for the users. Now, I showed this very simple manifest earlier that you can use to spec out your cluster. For advanced users, we also give a lot of power by exposing the pod template, the one the underlying StatefulSet uses, directly through our custom resource definition. This allows you to, for example, add additional metadata to Elasticsearch nodes, to use node affinity or pod affinity/anti-affinity, or to tweak the JVM (Elasticsearch is a JVM-based application) with environment variables, like in this example. Coming back to this idea that we're using StatefulSets under the hood to orchestrate the different tiers of nodes in an Elasticsearch cluster: that not only means that every pod has a stable network identity, it also means it has a stable association with a persistent volume, which makes a lot of sense, of course, since we're talking about a stateful application after all. But we need to give users the ability to influence how these volumes are created. This is why we also expose the volume claim template in the manifest, where you can tweak, for example, how big the volume should be; in this example, 100 gigabytes. 
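Sketched out, the pod template and volume claim template customizations Peter describes sit inside a node set like this. `ES_JAVA_OPTS` is a real Elasticsearch knob; the heap size, the storage size, and the `standard` storage class are placeholder values for illustration.

```yaml
  nodeSets:
  - name: default
    count: 3
    podTemplate:
      spec:
        containers:
        - name: elasticsearch
          env:
          - name: ES_JAVA_OPTS        # tweak the JVM heap via an environment variable
            value: "-Xms4g -Xmx4g"
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data      # the volume name ECK uses for the data path
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 100Gi
        storageClassName: standard    # placeholder; pick one suited to Elasticsearch I/O
```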
And, most importantly, which storage class should be used to create it. That, of course, has a lot of impact on the performance of the volume, and you need to make an educated decision based on the use case. There are the network-attached storage offerings from the cloud providers, which provide decent performance for many use cases, and for ultimate performance you probably want to use local volumes, local NVMe-based volumes. So, to summarize: Elastic Cloud on Kubernetes, as it is now, is a Kubernetes operator that allows you to deploy Elasticsearch, Kibana, APM Server, Beats, and Enterprise Search on Kubernetes. And when I say on Kubernetes, I mean vanilla Kubernetes, OpenShift, and the hosted Kubernetes offerings from the major cloud providers. I'm going to show how the interaction model works in the demo. And we already talked a little about the support for moving from one topology to another, rolling out changes, and version upgrades. The operator's code is open, so you can take a look on our GitHub; it's github.com/elastic/cloud-on-k8s, where you can see what we do under the hood, open issues, open pull requests if you want to, and get in touch with us as well. We just released version 1.3 of Elastic Cloud on Kubernetes, which is going to be the first release that's a certified operator. It has a new feature to allow volume expansion, which I'm going to demo. And for people who are invested in the Helm universe, we also offer the operator as a Helm chart now. And we have fixed some issues around IPv6. I think that's all I had in terms of slides, and I would now switch to the demo. Any questions so far? So far, you're hitting it out of the park, Peter. Keep going and we'll get the questions at the end. Sounds good. 
So I have set up a dev environment here, an OpenShift dev environment, and there's currently no operator installed. I can go now to OperatorHub and just search for Elastic, and you see the Elastic operator here. Very soon, I hope, you will see the certified operator there as well. I'm opting to install this in all namespaces, and in a few moments we should see the operator pop up in this view. This is how you know it's a live demo. Yeah, we need some elevator music there. I've taken the liberty of creating a project ahead of time; it's called elastic-monitoring, because we're focusing on the observability use case. So we see what I mentioned earlier about the custom resource definitions: each of them is an API extension, and you see them listed here as provided APIs. We can now go ahead and start creating an Elasticsearch cluster. I'll use a different spec that I've prepared ahead of time, which has the latest version of Elasticsearch. And we're starting out, as I said, with a very simple topology: just one group of nodes, three nodes, eight gigs of RAM. I'm giving half of the RAM to the JVM's heap space, which is recommended if you have this kind of mixed-role setup. The other thing we also covered in the presentation earlier: I'm speccing out a volume claim template here for 50 gigs. The last setting at the bottom is just because I haven't tweaked the kernel on this dev environment; I'm turning off a memory-mapping feature in Elasticsearch which gives you extra performance in production environments, but we don't need that for the demo. So I'm hitting create, and what we can see now is that the operator kicks into life and starts deploying this. We can actually use the OpenShift console to see the StatefulSet. As I said, we have just one uniform set of nodes at this point, and we see the pods coming up. 
Now, the next thing you want is a Kibana in front of that, so that we have something visual to see and I don't have to demo everything with the API. For that, I'm going to switch to the command line, so you can also see how the interaction works from there. Actually, let me deploy it first and then show you what I'm doing afterwards, to save some time. Now I'll fire up my editor. So this is the second custom resource definition, or rather custom resource in this case: Kibana. I give it a name, use the same namespace, give it a version, and I just want one instance. And this here is where the magic happens, basically: we say we want to automatically connect this to an Elasticsearch cluster called "elasticsearch", which, if you go back to the UI for a second, is the Elasticsearch cluster we created just moments ago. What the operator then starts doing is setting up certificates, setting up a user with minimal privileges for this, and making sure that the association between these two applications works. The only other thing I'm doing here is using the OpenShift service serving certificates feature to get a trusted TLS certificate for Kibana, and then I'm using this in the spec later on. Now, in order to access Kibana from outside the cluster, we also need an OpenShift route. I've prepared that as well: I'm using a Let's Encrypt certificate here and pointing at the service that Kibana has. I think I actually showed this briefly: we expose Kibana as well as Elasticsearch through Kubernetes services, as I mentioned in the presentation, and the route I'm about to create is going to target exactly that. So let's go back to our UI and check if Kibana is up. Kibana, as I mentioned, is basically a stateless application, so we just use a Deployment here to roll it out into the Kubernetes cluster. 
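The Kibana resource Peter walks through is, in sketch form, something like the following. The namespace and version are assumptions chosen to match the demo; the `elasticsearchRef` is the piece that makes the operator wire up certificates and credentials automatically.

```yaml
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana
  namespace: elastic-monitoring
spec:
  version: 7.10.0
  count: 1                  # a single Kibana instance
  elasticsearchRef:
    name: elasticsearch     # the cluster created earlier in the demo
```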
If you click through to that, you see the Kibana pod is up and running. You can also monitor the health of these applications we've just deployed using the standard command-line tooling. You can use abbreviations; you don't have to spell out "elasticsearch" or "kibana", and I'm using the abbreviations here. We see both reporting green health. So let me go now to the domain I'm using for this. Actually, that doesn't look right... but yeah, that's more like it. I'm seeing a login window now. So how do we log in? There's a built-in user called "elastic" that I'm going to use, and we expose the password for this user through a Kubernetes secret. It's named after the cluster: "elasticsearch" plus "es-elastic-user". Then I can just use a little bit of command line to pull that out. Something is obstructing my view... there we go. Actually, there's a newline in there, so we need some built-in shell functions to trim that off. And on macOS there's a function to copy things from the command line into your pasteboard; I'm using that as well. That should give me the password. For production environments, you're probably not going to do that all the time, but instead configure some form of single sign-on. For the demo, I think that would be too much for now. So what we see now is Kibana. And I said we're going to look at observability a little bit today during this demo, and we see there is nothing there: there's no data in there. So how do we get data in? Remember what I said earlier about the Elastic Stack: Beats is the way to go to ship data into Elasticsearch. So how do you go about it? Let's say we want to start monitoring this Elasticsearch cluster itself. A good starting point is our documentation page for Elastic Cloud on Kubernetes. It's on the elastic.co website under /guide, and then you just click through to Elastic Cloud on Kubernetes. 
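The password dance (pull the secret, base64-decode it, trim the trailing newline, copy to the clipboard) is roughly the following. The command shape against a real cluster is an assumption based on ECK's secret-naming convention; the runnable part below only demonstrates the decode-and-trim step on a stand-in value.

```shell
# Assumed command shape against a real cluster (not run here):
#   oc get secret elasticsearch-es-elastic-user \
#     -o go-template='{{.data.elastic | base64decode}}' | pbcopy
#
# The decode-and-trim step itself, on a hypothetical stand-in value:
SECRET_B64='c3VwZXJzZWNyZXQK'     # pretend this came out of .data.elastic
PASSWORD=$(printf '%s' "$SECRET_B64" | base64 -d | tr -d '\n')
echo "$PASSWORD"
```

`pbcopy` is the macOS pasteboard helper Peter mentions; on Linux you would pipe to something like `xclip` instead.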
There we have a section for each individual application that we support. As we're interested in Beats now: in the Beats chapter, there is a subsection with configuration examples. That's a good starting point, because these Beats configurations are usually a lot of YAML, so it's a good idea to start with one of these examples. I'm going to use the stack monitoring example, so that we monitor the cluster we've just created. What I've done is take that example; let me apply it first, then show you what it does. So this is the third custom resource I'm introducing: we had Elasticsearch and Kibana, this is a Beat. Same principle: a name, a namespace, a version. The type is "metricbeat", because we want to get the metrics from the Elasticsearch cluster. And then again this elasticsearchRef element that we've seen before, which automatically sets up the connection to Elasticsearch. Then the Metricbeat configuration extracts Elasticsearch-specific metrics; that's a specific integration for Elasticsearch, and there are integrations in Metricbeat for many kinds of common applications. And it targets an Elasticsearch cluster that has these labels on it. Now, in order to have those labels on (sorry, I'm scrolling wildly through this document), I also applied a configuration change to Elasticsearch itself to add this metadata to the pods. This is the cluster from before; all I did was add a little bit of metadata here, so that Metricbeat can target this specific cluster. And this is a good opportunity to take a look at how we roll out changes in Elasticsearch clusters. So you see it's terminating the second node; because I've been talking so long, it's already at the second node of the default node set. 
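In sketch form, the stack-monitoring Beat resource looks something like this. The module configuration here is a simplified assumption; the real stack monitoring recipe in the ECK docs is considerably longer, and the service hostname follows ECK's `<cluster-name>-es-http` convention.

```yaml
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: metricbeat
  namespace: elastic-monitoring
spec:
  type: metricbeat
  version: 7.10.0
  elasticsearchRef:
    name: elasticsearch              # where the collected metrics are shipped
  config:
    metricbeat.modules:
    - module: elasticsearch          # the Elasticsearch-specific integration
      metricsets: ["node", "node_stats"]
      period: 10s
      hosts: ["https://elasticsearch-es-http:9200"]
  deployment: {}                     # run with the operator's deployment defaults
```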
So it's doing one node at a time: terminating it, waiting 30 seconds to drain connections to the node, rolling out the change, booting it up by recreating the pod, and then waiting for the cluster to be healthy before it continues with the next node. So in a very safe way, it's rolling out this configuration change across the whole cluster. It takes a little while, but it makes sure we don't lose availability during that process. I'm not going to sit here and watch how this rolls out; we are almost done anyway. Instead, I want to roll out two more things into that cluster, because I don't want to only monitor my own Elasticsearch cluster; I also want to monitor Kubernetes itself. So I'm going to roll out a DaemonSet for Metricbeat that allows us to monitor Kubernetes, and Filebeat to extract logs from all running containers. Again, I'm using these examples from our documentation page. Let me first apply them, to save time, and then explain a little what they do while we wait for the rollout. First, Metricbeat. This is by now familiar: the custom resource for Beats, type "metricbeat". The elasticsearchRef we discussed; the new thing here is the kibanaRef, which instructs the operator to automatically connect this Metricbeat instance to Kibana as well, so that Metricbeat can install its dashboards into Kibana. This YAML manifest is a slightly modified version of the example. My colleague Michael is actually working on a blog post for the OpenShift blog, which will have the full manifests for you to download very soon. What this does is target OpenShift's control plane specifically: we have made a few tweaks here to account for the OpenShift namespace structure, for example, to extract metrics from the controller manager, the scheduler, CoreDNS, and so forth. We're deploying this as a DaemonSet. 
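A sketch of that Kubernetes-monitoring Metricbeat resource, with the kibanaRef that installs the dashboards. The metricsets, period, and kubelet endpoint are illustrative assumptions, not the full OpenShift-tuned manifest from the blog post Peter mentions.

```yaml
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: metricbeat-k8s
  namespace: elastic-monitoring
spec:
  type: metricbeat
  version: 7.10.0
  elasticsearchRef:
    name: elasticsearch
  kibanaRef:
    name: kibana                     # lets Metricbeat install its dashboards into Kibana
  config:
    metricbeat.modules:
    - module: kubernetes
      metricsets: ["node", "pod", "container"]
      period: 10s
      hosts: ["https://${NODE_NAME}:10250"]   # illustrative kubelet endpoint
  daemonSet: {}                      # one Metricbeat per cluster node
```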
So in that custom resource, you can choose how to deploy Metricbeat: either as a DaemonSet or as a Deployment. In this case, we want a DaemonSet, which you can see here. And similarly, Filebeat is basically the same idea. We've seen it now a couple of times: it uses a different type, "filebeat" instead of "metricbeat", but is otherwise very similar, using elasticsearchRef to automate the setup of the connection. And then we're using a feature in Beats that's called autodiscover, which is for Kubernetes environments where pods come and go: it automatically picks up the logs from these containers as they are created, and stops picking them up once they are deleted. Again, we're also deploying this as a DaemonSet. I think that's all you need to know for now. Let's take a look at the OpenShift console to see what happened while I was talking, and whether the deployment has worked as it should. We see the two DaemonSets, one for Filebeat to extract the logs, one for Metricbeat, and they look okay, they look healthy. So let's maybe take a look back at Kibana and see if data is starting to flow. This can take a minute or two, but we already see log data coming in and metrics data coming in. We can now, for example, zoom in on the metrics and maybe slice and dice this a little differently by looking at Kubernetes pods. There are many pods, so maybe grouping them by namespace looks better, and maybe focus on our namespace here. Let me refresh this. So we see the deployed pods, and we can zoom in on the logs for this Elasticsearch cluster: we can jump through and see the logs streaming, and I can actually stream this live if I want to, and see new logs coming in. And similarly, if I go back one second, we could also click through and get an overview of metrics. But this is all fairly generic. 
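For reference, the Filebeat side of that rollout, sketched with hints-based autodiscover as in the docs examples. The host-path mount is trimmed to the essentials; a production manifest would also carry service account and RBAC wiring.

```yaml
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: filebeat
  namespace: elastic-monitoring
spec:
  type: filebeat
  version: 7.10.0
  elasticsearchRef:
    name: elasticsearch
  config:
    filebeat.autodiscover:
      providers:
      - type: kubernetes             # picks up container logs as pods come and go
        hints.enabled: true
  daemonSet:                         # one Filebeat per node
    podTemplate:
      spec:
        containers:
        - name: filebeat
          volumeMounts:
          - name: varlogcontainers
            mountPath: /var/log/containers
        volumes:
        - name: varlogcontainers
          hostPath:
            path: /var/log/containers
```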
And if you remember what we did in the beginning, I said we also installed an instance of Metricbeat to monitor the Elasticsearch cluster directly, and this is where we get a richer set of metrics: what we call stack monitoring for the Elasticsearch cluster we've deployed, and the individual nodes. And finally, the last bit: we deployed Metricbeat to monitor Kubernetes itself, and it rolled out a few dashboards as well, which we should be able to see now. And indeed, we see six Kubernetes nodes, which is accurate. We see data streaming in slowly, and get stats about CPU usage, memory usage, network. We can also look at a specific dashboard for the Kubernetes controller manager, with metrics for the work queue and CPU again. Now, one last thing. Just to summarize what we've done: we've deployed Metricbeat to monitor Kubernetes and the hosts in this cluster; we deployed Filebeat to harvest logs from all running containers; and I deployed stack monitoring to get a richer view of a specific application, in this case Elasticsearch itself. And we've seen how a rolling upgrade happened when I added additional metadata for the scraping of the metrics. So the one last thing I want to show you is a new feature we added in version 1.3, which is online dynamic volume expansion, if you will. Imagine we've deployed this cluster and we're happy with the initial setup, but we realize we've slightly under-provisioned these nodes with only 50 gigs of storage space, and we want to fix that. How would you go about it? If you were just using Kubernetes StatefulSets, you would have to create a new StatefulSet, because the StatefulSet and its specification of the persistent volume claims are immutable: you cannot change the capacity once you've provisioned it. There is a Kubernetes enhancement underway to fix that in the StatefulSet controller. 
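In ECK 1.3 terms, growing the volumes amounts to editing the storage request in the deployed Elasticsearch spec. A sketch, assuming the storage class in use has `allowVolumeExpansion: true` set:

```yaml
  volumeClaimTemplates:
  - metadata:
      name: elasticsearch-data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 100Gi   # was 50Gi; the operator expands the PVCs in place
```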
But until that lands, we've built in a workaround that allows you to directly go into the YAML spec as it's deployed. I'm just doing this in the OpenShift UI now, and changing this to, I don't know, 100 gigs instead of 50. Save the resource, and then let's watch the Kubernetes events. What this does is work around this limitation in StatefulSets: it's going to directly edit the persistent volume claims and then recreate the StatefulSet on top of that, re-adopting the pods that have now been changed. You see the events flowing in, already saying that the volume expansion succeeded for this persistent volume claim. So if we go back to our stack monitoring view, we should see that Elasticsearch picks up this volume expansion without the need of a restart. You see the first node, elasticsearch-es-default-1, which is also the master node, has already picked up that change: the volume capacity has doubled without the need to recreate the nodes or recreate the StatefulSet manually. I think that's all I wanted to show. It was a lot. I think it's time maybe for questions now, if you have any. So there is one that someone asked, about how to install Elasticsearch plugins when using the operator. Yes, it's a good question. There are basically two ways. One is to use an init container to install the plugin before you start the main Elasticsearch container. That of course has the disadvantage that you're susceptible to any kind of network glitch, because it has to download the plugin every time the pod boots or restarts. The alternative way, which we recommend, and this is actually also documented here on our documentation page, is to create a custom Docker image. You're free to create your own image, as long as it's based on the official images, and basically install the plugin at image build time. That's the second way of doing it. 
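Both plugin-install approaches can be sketched briefly. The plugin name `analysis-icu` is just an example plugin; the init-container command follows the standard `elasticsearch-plugin` CLI.

```yaml
# Option 1: init container (re-downloads the plugin on every pod start)
  podTemplate:
    spec:
      initContainers:
      - name: install-plugins
        command: ["sh", "-c", "bin/elasticsearch-plugin install --batch analysis-icu"]
```

```dockerfile
# Option 2: custom image, built once, based on the official image
FROM docker.elastic.co/elasticsearch/elasticsearch:7.10.0
RUN bin/elasticsearch-plugin install --batch analysis-icu
```

The second option bakes the plugin into the image at build time, which avoids the per-restart download.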
So this here is showing how to use an init container: basically you run a little shell script that runs the installer. And the alternative way is to use a custom image. We have a simple example here as well, a very simple two-line Dockerfile based on the official images, which is probably recommended for more production-ready scenarios. That makes it easy, hopefully, for everybody here, and we'll add that link to the page into the video so that you can find this later. I had a question, because I was reading through what was in the latest release of Elasticsearch, and specifically in Kibana: is Kibana Lens part of the operator, or is that a plugin? Can you tell us a little about that? Because those looked like really cool visualizations that got added in the latest release. Yes. So what you get when you install the Elastic Stack through the operator is just the regular version of Kibana, as you would get if you downloaded it from our website: the basic license version. And yes, it comes with Lens for visualization. So if I create a new visualization here and click through to Lens, it's already included in the package, and I can, I don't know, start playing around with this, for example drag and drop some fields on the left-hand side to create a graph here. In this case, it's now showing me the count of records for each different hostname. It's not the most imaginative use of it, but you can of course come up with your own uses. It comes out of the box. So it's not a separate plugin? No, it's part of Kibana. You get it right away. So, in this latest release, are there other... I love this part, because it makes it really easy for me to create visualizations, even though you picked probably the most boring one we could pick. 
But are there other things that came in this latest release that you'd like to point out? Because I read the whole thing and there's a ton in it.

So in the latest 7.10 release, I think one of the more interesting features is the formalization of the concept of data tiers. I spoke about this very briefly in the presentation: you have these hot-warm-cold use cases where time-series data is moved to cheaper hardware over time. Support for this has been formalized in the 7.10 release and even extended. Maybe I need to step back a little. As you know, data in Elasticsearch is organized in indices, and those are backed up in snapshots. With the latest release you can now search through snapshots that are stored in very cheap storage, think S3 from Amazon or something like that. So that gives you another very cost-effective way of building these multi-tier architectures with Elasticsearch clusters, where you move your data to cheaper and cheaper tiers as it becomes less relevant to you. Usually for log data you're only interested in the last week's logs; after a week, maybe you keep it around for the occasional root-cause analysis, and after another week you're maybe only keeping it for regulatory audit purposes and rarely ever look at it. That kind of use case is made much easier with the latest release.

That actually is one of the wonderful use cases for hybrid cloud and hybrid storage: pick your lowest common denominator for cost and store your stuff where it should be. I think that's one of the sweet spots for cloud computing and taking advantage of it. That's a great thing.
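As a rough illustration of the hot-warm topology discussed above, expressed through the operator's custom resource: node counts, the version, and the `storage` node labels are assumptions, and the `node.roles` values follow the tier roles formalized around the 7.10 release.

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: tiered               # illustrative cluster name
spec:
  version: 7.10.0            # assumed version
  nodeSets:
  - name: hot
    count: 3
    config:
      # Hot nodes take writes and recent data; they also hold master roles here.
      node.roles: ["master", "data_hot", "data_content", "ingest"]
    podTemplate:
      spec:
        nodeSelector:
          storage: ssd       # assumed label on fast-disk Kubernetes nodes
  - name: warm
    count: 2
    config:
      # Warm nodes hold older, less frequently queried indices.
      node.roles: ["data_warm"]
    podTemplate:
      spec:
        nodeSelector:
          storage: hdd       # assumed label on cheaper-disk Kubernetes nodes
```

An index lifecycle management policy would then move indices from the hot to the warm tier over time, and searchable snapshots extend the same idea to object storage such as S3.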
I was wondering too, since this is the community side of OpenShift: this operator is available on operatorhub.io, which is the community side, and you've also got a certified one. What are the differences between the community release of the Elastic operator and the product side? Is there anything missing between the two, or how are they differentiated if I'm just using the community pieces?

So the community operator is basically identical to the certified operator, and both allow you to run under the free-to-use basic license, which is what we've seen today. And both the community and the certified operator, or the other download options we have, whether you download it from our website or install it via Helm, it doesn't matter: all of these versions of the operator allow you to install a license into it. And then you get all the commercial features in the Elastic Stack: machine learning, certain aspects of the observability features, the security features and so forth. So no matter where you start, there's always a way to use these commercial features if you need them, to upgrade and install a license. So there's no strong difference, and you're not locked in if you start with the community operator or any other download option.

That makes sense. And it's actually nice to have that easy on-ramp from the community version to the pro or product version. Because where I've heard the most about Elastic recently inside of Red Hat is around Open Data Hub; Elastic is one of the components in the Open Data Hub reference architecture. I think they were using the community side initially, and I'm looking to see whether they're using the operator. I haven't talked to the Open Data Hub team in recent weeks or months.
So it might be time to get them back on too, to see and get their feedback. How long has this operator been available? Because I want to ask what feedback you've gotten from customers on using the operator, but if it's only been a few weeks, there might not be much yet.

So, you're testing my memory. I think we had the GA version of the operator, the 1.0 release, in January 2020. I hope I'm right.

Yeah, it was January 2020, so your memory serves you correctly.

Yeah, and we had a beta version before that, going back almost another year or so; I don't remember exactly when we initially released it. So we've had quite some time, quite some exposure to users, and I think the feedback has been overwhelmingly positive. People are very happy with how the orchestration works and with how the automation we offer makes it easier to deploy complex topologies on Kubernetes. So we're very happy with how this has turned out and how people have responded to it.

And you had said, prior to us getting started, that you were part of the team that built the operator. What was that process like for Elastic, not just the certification but the actual building? What advice would you give to someone starting down their operator journey now, besides read the documentation?

For many of us it was the first time writing any operator for Kubernetes, so it was a steep learning curve. Of course, you get familiar with the frameworks and make a choice; there are different operator frameworks out there. Eventually we settled on controller-runtime, and on Kubebuilder initially, even though we have now replaced parts of what Kubebuilder automates away with things we do manually.
And my colleague Sebastian gave a quite good talk at last year's KubeCon about the pitfalls of writing your own operator, which summarizes our experience and our journey. Just to give an example: one thing we had to realize during that process is that the way you interact with the Kubernetes API is usually through cached clients. There is no direct request to the Kubernetes API server; that's a measure taken to reduce load on the actual server. Instead you have a local cache in the application, and that cache is synchronized from the API server. But that means whatever interaction you have with the Kubernetes API is always a little bit behind the actual state of the cluster. So when you build your operator, you need to factor that into the algorithms you build to orchestrate Elasticsearch, because you have to realize: I see this pod now, but in reality that pod might already have been deleted. Or maybe you don't see a pod that you expected to see, but that's also a function of the caching delay that comes with the Kubernetes client. This is just one example of things we discovered during development, and that we now factor in whenever we build a new controller and support a new Stack application on the operator. By now we're aware of these pitfalls, but initially they were something to work around.
Another lesson was designing your custom resource definition. We went through multiple iterations of trying to abstract many things away and giving users very high-level toggles to turn things on and off, and then, as I showed during the presentation, we gradually moved back from that to a place where we expose a lot of the underlying abstractions directly in the CRD. That gives the user more power and flexibility, and it also reduces cognitive overhead: every concept we introduce in the custom resource design is something every user has to learn and understand. So we eventually opted to remove a lot of that abstraction and expose the Elasticsearch configuration as directly as possible in the CRD, because many users of the operator are already familiar with Elasticsearch and know what to put into the Elasticsearch configuration file. We wanted to give them a similar experience on Kubernetes, where they can just apply what they already know and plug it into the custom resource manifest, the same as in an on-premise installation.

And someone's just added in the chat "Writing a Kubernetes operator: the hard parts", which Sebastian wrote. We'll put that in the notes too, linking it in the YouTube video that I post later from the session. So we're almost at the end of the hour here. As a closing thing, because I'm totally focused on helping people build their operators, whether it's for Elastic or Tremolo Security's OpenUnison, I really appreciate hearing people's feedback on the operator and the Operator Framework, so thank you for that. I'm wondering if there's anything coming on the Elastic roadmap, a new feature or functionality we should keep an eye out for in upcoming releases. What are you working on now?
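As context for the CRD discussion above, the passthrough Peter describes means arbitrary `elasticsearch.yml` settings go straight into the manifest's `config` block. A minimal sketch; the cluster name, version, and settings are illustrative:

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart           # illustrative cluster name
spec:
  version: 7.10.0            # assumed version
  nodeSets:
  - name: default
    count: 1
    config:
      # These keys are passed verbatim into elasticsearch.yml, so existing
      # Elasticsearch knowledge carries over unchanged.
      node.attr.zone: eu-west-1a
      cluster.routing.allocation.awareness.attributes: zone
```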
So I can't speak for the overall Elastic roadmap, but what we're working on for the operator is something I can talk about a little. Of course, we're trying to bring more Stack applications onto the operator. There's a new thing coming out called Elastic Agent, which is a way of bundling these different Beats together into one binary. So instead of what you saw during the demo, where I had to deploy Metricbeat separately from Filebeat, and there are other Beats as well, in the future Elastic Agent will allow you to deploy one binary, and the different harvesting capabilities are just configuration of the agent. That's something we're looking into supporting. Eventually we're also looking into some form of autoscaling capabilities for the operator. And, you know, I referenced Elastic Cloud Enterprise as the on-premise product, which has a UI and APIs; in the further future we want to close the gap between these two products and give you a similar experience here as well, maybe with a UI and some APIs.

So we did get one more question from the live stream: how would I handle scheduling against specific nodes? In more detail: to deploy the Elasticsearch operator on dedicated infra nodes, is it preferred to configure the taints and tolerations on the deployment config of the Elastic Stack, or on the operator config itself?

That's a complicated one. I'm reading this as a question about placement. The operator itself, if you install it via OperatorHub, is managed by the Operator Lifecycle Manager, so the namespace it lands in is sort of predetermined. But I think that's not the question; I think the person is asking how to influence where the Elasticsearch nodes land.
And there you can use node selectors, you can use any feature that Kubernetes offers to target that, and you do it, as I think I showed briefly, through the pod template. I'm just going back here. For example, through node affinity, or you can use a node selector. Everything Kubernetes offers you to target a specific node or exclude nodes, you can use here, because we're directly exposing the pod template.

Perfect. I hope that answered their question. I think it might have, so that's good. If you have a final slide, could you pop over to it so people know where to find Elastic? That's not hard, elastic.co, and where to find you guys. That's probably a good one to end on.

I only have a slide with our GitHub repository on it, but if you go there, the website is linked as well.

That works fine. I'm going to make you send me your slides so I can attach them to the YouTube video as well. And I think we are pretty much at the end of our hour. I'm going to pause for a few seconds and see if anyone else pops in with another question. Otherwise, I'm going to let you go back to your evening, because you're in Vienna, and I'll grab another coffee and make sure I get this wonderful walkthrough of the Elastic world view of things up on YouTube for everybody later today. And we'll definitely have you back with the next release. Now I'm really going to have to go play with Kibana Lens, because you've made it dead easy and I have no excuse not to make beautiful visuals, even though you only showed us the bar chart. I saw some demos earlier with just beautiful mapping and really interesting stuff. So there's a whole world of visualization that you've opened up and made much easier for folks.
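For reference, the node-targeting approach Peter described a moment ago, pinning Elasticsearch pods to dedicated infra nodes through the pod template, could look roughly like this. The infra label and taint key are assumptions; any selector, affinity rule, or toleration Kubernetes supports can go in the same place:

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch        # illustrative cluster name
spec:
  version: 7.10.0            # assumed version
  nodeSets:
  - name: default
    count: 3
    podTemplate:
      spec:
        # Schedule only onto nodes carrying the (assumed) infra role label.
        nodeSelector:
          node-role.kubernetes.io/infra: ""
        # Tolerate the (assumed) taint that keeps other workloads off
        # those dedicated nodes.
        tolerations:
        - key: node-role.kubernetes.io/infra
          operator: Exists
          effect: NoSchedule
```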
So I totally appreciate that, and we'll have to work with the Open Data Hub team on getting some nice visualizations added in, because I've seen a few. I think this really is going to expand on all of that, and you've made it dead easy for people to deploy it now on OpenShift and elsewhere on Kubernetes. Kudos to you and the whole team over at Elastic. Thank you so much for taking the time today.

Thank you. Thanks so much for having me.

All right. Take care, Peter, and everybody else stay safe and have a great week. Thank you. Bye now.