Hello everyone. As Katie mentioned, today we're going to be explaining InfluxDB together with telegraf-operator and how to use them to monitor Kubernetes workloads. We're going to show some examples, a lot of which are based on what's in the telegraf-operator repository. Maybe we'll start by introducing ourselves. My name is Wojciech Kotian. I'm a software engineer at InfluxData, one of the people from our company who contributes to telegraf-operator, and I'm going to be the one showing telegraf-operator today. Cool. So I'll go ahead and introduce myself. I'm Pat Gon, an engineering manager here at InfluxData. I manage the deployments team, and we're responsible for all the plumbing that's in place, the whole CI/CD pipeline, for our Cloud 2 SaaS offering. So that's where we'll focus our talk: the space that we know, which is Kubernetes and InfluxData. First, I really want to say that InfluxData is the remote-first company behind InfluxDB. I think most of you probably know more than I do about how to use our product, but I'll give a little bit of an overview; don't hesitate to ask questions along the way and we'll get to them at the end. InfluxDB is the platform for building time series applications. I wrote all these really good words and now I'm having to read them. At the heart of it, it's an open source time series database, purpose-built for time series data, whether that comes from sensors or, say, one of those doorbells where you can see the person at the door; there's time data there. Wherever there's time-based data, InfluxDB is a great platform for developing applications around it. You can start from the UI, or you can skip right past it and use the raw code and the APIs. We've got APIs and client libraries in several of the most popular programming languages.
So, Telegraf. If you're not already familiar with Telegraf: if you have, say, that Ring device over there and you want to get your data somewhere, Telegraf has the input and output plugins to get your data from your device into a database. Of course, my preference is that you put it into InfluxDB, but we've got plugins for other destinations too. It's an open source agent, it has a really healthy open source community, and it's maintained by InfluxData. There are over 300 different plugins that let you get your data in, manipulate it on the way in, and manipulate it on the way out. It's a really powerful tool. Today we're going to focus on it in the context of Kubernetes, but there were several talks at InfluxDays North America that covered Telegraf itself, including, I think, a beginner session, so check those out. Now I'm going to tell you a little bit about telegraf-operator; I wrote some notes ahead of time to prepare for this. telegraf-operator packages up the operational aspects of deploying a Telegraf agent on Kubernetes. The idea is a Kubernetes sidecar: based on pod annotations, telegraf-operator injects a sidecar container and provides it with the Telegraf configuration to scrape the exposed metrics, all defined declaratively. It also lets you define common output destinations for all your metrics, so you can send them to InfluxDB or elsewhere. I'm going to pause there, because I want to let Wojciech finish setting the stage for his demo and I don't want to take it all. So Wojciech is going to take it from here and do a demo. Right, so thank you, Pat.
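To make the input/output plugin model concrete, here is a minimal sketch of a Telegraf configuration with one input and one output. The URL, organization, bucket, and token values are placeholders, not values from this talk:

```toml
# Minimal Telegraf configuration sketch: one input plugin, one output plugin.
# The urls/organization/bucket/token values below are placeholders.
[agent]
  interval = "10s"          # how often inputs are gathered

[[inputs.cpu]]              # example input: CPU usage of the host
  percpu = true

[[outputs.influxdb_v2]]     # example output: write to an InfluxDB 2.x instance
  urls = ["http://localhost:8086"]
  organization = "my-org"
  bucket = "my-bucket"
  token = "$INFLUX_TOKEN"   # read from an environment variable
```

Any number of input and output sections can be combined in one file, which is what the 300+ plugins plug into.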
So as you mentioned, telegraf-operator is meant to run alongside Kubernetes workloads, and that's what we're going to focus on today. Could you stop sharing so I can start sharing on my end? Yes, but first I had to show this slide. Okay, done, now I can stop sharing. Okay, yes. I noticed there's a question about APIs for InfluxDB, so I'll just share this real quick and keep it open. We have documentation for all of the APIs, and there are also client libraries, which I'll show in a bit. It's well documented; these are REST APIs, there are query languages called Flux and InfluxQL that can be used to get the data out, and writing data is relatively simple. But going back to telegraf-operator: I have a checked-out copy of the telegraf-operator source code, and it includes a lot of tooling for running it using kind. kind is Kubernetes in Docker, a way to run a whole Kubernetes cluster locally using just Docker. That's what we use for a lot of telegraf-operator testing, because it's a simple way to run things, and it also lets you do fancy things like building a custom build of the telegraf-operator container image and loading it into the cluster, which is not something we'll do today, but it's really useful in development. We'll use the exact same setup that we use when developing. What I did in advance, because it takes around one to two minutes, is run the `make kind-start` command, which creates a kind cluster on my computer and deploys a few things. On top of that we're going to deploy InfluxDB version 2, because that's what we want to demo, so I just deployed it; this is the open source version. Let me see what's happening in my kind cluster. Okay, my InfluxDB v2 is running now, so I can move on.
So right now I've created a fresh Kubernetes cluster on my machine and deployed the open source InfluxDB version 2 to it, and I'm going to bootstrap it. This would be the same if I deployed InfluxDB locally, but I want to have everything inside my cluster. On first deploy I'm going to set up an organization and everything for InfluxDB itself, because this is where we want to send the metrics. Also, touching on the question of how to get data in: the UI shows how to write data from a lot of places. Say you're a Golang developer, it'll give you ready-to-use snippets. Obviously you'd want to replace the token and some other things over time and parameterize this, but it's a really good way to get started with putting data into InfluxDB. Anyway, right now what I really want is the token that lets me write to my organization, so I'm going to grab that, and then we can get back to explaining and configuring telegraf-operator. I haven't deployed the operator yet, because there's one additional thing I want to do first. telegraf-operator has a concept of classes, which are classes of applications, or classes of metrics you're gathering, and each class maps to a specific set of Telegraf configuration. One of the things we should set in there is how the data should be written and where it should go, because telegraf-operator is meant to be generic, so we point it at specific outputs using standard Telegraf configuration. So what I'm going to do right now is tell it: also write everything to my InfluxDB v2. What I'm telling it is that in my cluster there is an influxdb2 service, which is what we were just talking to in the browser, in the influxdb2 namespace, and the port it listens on is 8086, the default port.
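As a sketch, the output section being added to the class looks roughly like this. The `influxdb2.influxdb2:8086` address and the `demo` org and bucket come from this demo setup; the token is a placeholder:

```toml
# Output section added to a telegraf-operator class (sketch).
# influxdb2.influxdb2:8086 is the in-cluster service from this demo;
# the token is the one copied from the InfluxDB UI.
[[outputs.influxdb_v2]]
  urls = ["http://influxdb2.influxdb2:8086"]
  organization = "demo"
  bucket = "demo"
  token = "<token from the InfluxDB UI>"
```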
I'm going to tell it my organization is demo, as I just entered it, and the bucket is demo. Now I'm going to copy my token. It's only on my local machine, and I'm fine sharing it because I'll just delete the cluster later on. I'll also copy it to, I believe, the default class, since we'll be using that one as well. So right now I'm configuring the classes, meaning that when we want to monitor some workloads, we'll specify the class of that workload, or the default class will be used if none is specified. I have created app, default, and I believe infra; I don't think we'll use all of them, but all of them specify that the data should go to the new InfluxDB v2 instance I just created. This is standard. Yes? You're missing your equals sign on token. Oh, thank you so much. Yes. Okay. That would have been a painful demo. It didn't just bother me, it bothered someone else too; I was watching it. Thank you so much for noticing. So now I'm going to go back to my terminal and deploy the example classes. The example is already committed, and it shows how to use this with InfluxDB v1, because from the development perspective we keep using version 1 for that, which is something we should improve; it's just that telegraf-operator was created when only v1 was available. So now I've deployed my class configuration. I can update it in the future, and there is live reload, so I can change it later, but for now it's deployed. What I'm going to do next is deploy telegraf-operator itself, and it can be deployed in multiple ways. We have the dev.yaml file, which is meant for local development; because I'm doing this in kind, I'm just going to reuse it. It has hard-coded certificates, so it's not really production ready, but it's enough for kind. We also have a Helm chart.
So, right, now it's on GitHub: telegraf-operator. Yes, this is what I was looking for. We have a telegraf-operator Helm chart available as well; you just add our InfluxData Helm chart repository and then you can install it, or use `helm upgrade --install`, which will either install it or upgrade it depending on whether it's already installed. That's the preferred way of getting production environments running, but because we're using kind and all of the examples are based on this, I'm just going to follow this rather than the Helm-chart-based installation. Now I can go and see what's running in my cluster, and you can see that telegraf-operator is running. It's ready to handle new deployments coming in and add the Telegraf sidecars. Now, to explain how telegraf-operator works, maybe I'll just open one of the example deployments. This is a very simple definition of how to run Redis. It is a StatefulSet, but it doesn't really include volumes; in real life this would be a more complex StatefulSet, but it's an example of how to use telegraf-operator to monitor things. The way telegraf-operator works is that for each pod that gets created, it checks the annotations, and if there is a telegraf-operator annotation, it will inject the sidecar. So right now we can see there is just one container, called redis, using the default Redis image, and we can also see the annotation telling telegraf-operator that it should contact localhost on the standard Redis port using the Redis plugin. This is one of the plugins that Pat mentioned, and maybe I'll explain this a bit more. Actually, Wojciech, before you go into that: I was hoping, because I don't think we actually showed people the repo. Oh, that's a really good point: telegraf-operator. This is all code you can get to.
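As a sketch of what such a Redis example looks like (the annotation names follow the telegraf-operator README; the rest of the manifest is a minimal hypothetical example, not the exact file from the repository):

```yaml
# Sketch of a Redis StatefulSet monitored via telegraf-operator.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: redis
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
      annotations:
        # which class (from the classes secret) to append to the config
        telegraf.influxdata.com/class: app
        # raw Telegraf input configuration for this workload
        telegraf.influxdata.com/inputs: |+
          [[inputs.redis]]
            servers = ["tcp://localhost:6379"]
    spec:
      containers:
        - name: redis
          image: redis:alpine
```

The sidecar talks to `localhost` because containers in a pod share the network namespace.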
Yes, telegraf-operator is open source, and it also includes an extensive README on how to get started with development and deployment, and it points to the Helm chart. So if you want to rerun what I'm showing today, I think the easiest way is to clone it; I'm basically using a lot of the make targets and applying some of the things that are also mentioned in the documentation, and you can see that we're applying these examples through GitHub URLs rather than locally. But yes, the repository is on GitHub, it is open source, you can clone it and run the same things. Yeah, and you're working within it right now; I just realized we hadn't said that. No, thank you so much for this, that's a very good point. I'm so deep in the repository that when I explain things, what is day-to-day for me may be new for a lot of people, so it's good to mention it. So, going back to the configuration, I may have skipped explaining some of these things. The way telegraf-operator works, it combines the Telegraf configuration that the Telegraf sidecar will read from multiple sources. One of the sources is the classes that I mentioned, which is just a vanilla Kubernetes secret with the definitions of all the classes. Usually this includes the outputs, plus some tags or general settings that should be applied to all the metrics related to this class of applications you want to monitor. In this case we added the output to it, which means that everything with the app class will write to our InfluxDB v2. We also make it write to standard out for debugging, and we have it use global tags, which will show up in a while in the UI: type is set to app, and hostname and nodename are the names of the host and the node that Telegraf is running on.
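A sketch of such a classes secret, where each key is a class name and each value is a chunk of Telegraf configuration appended to every sidecar that uses that class. The secret name and namespace here follow the repository examples and may differ in your setup; the token is a placeholder:

```yaml
# Sketch of the telegraf-operator classes secret (one class shown).
apiVersion: v1
kind: Secret
metadata:
  name: telegraf-operator-classes
  namespace: telegraf-operator
stringData:
  app: |+
    [[outputs.influxdb_v2]]
      urls = ["http://influxdb2.influxdb2:8086"]
      organization = "demo"
      bucket = "demo"
      token = "<token>"
    [[outputs.file]]
      files = ["stdout"]
    [global_tags]
      type = "app"
      hostname = "$HOSTNAME"
      nodename = "$NODENAME"
```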
Now if we take a look at the Redis deployment, we are adding some other pieces of Telegraf configuration. One of these is `inputs.redis`, which means: use the Redis input plugin, just as previously we were using the InfluxDB v2 output plugin. So we're telling Telegraf: talk to Redis on this port, get some of its standard metrics, and send them out to InfluxDB v2 at this specific URL. We could also tell it to send the data to a cloud instance, to an on-prem instance of InfluxDB, or to any of the very, very large set of output plugins; it could send directly to Kafka or some other output, or write to a file. Basically, the pod annotation says: this is the input. The outputs are in the secret, in the classes, and then they get concatenated. So my Redis definition says: this is how you should gather metrics for my Redis. My classes say: this is where you should write them. And the annotation also says: by the way, this is the app class, meaning that whatever I put in the app class in the classes definition is where the data goes. We can also specify memory and CPU requests and limits for the Telegraf sidecar. This one is invalid and will be ignored in this small development test case, but the CPU limit will be set on the Telegraf sidecar. Anyway, that's that; let's go ahead and deploy this. So this was examples redis, I believe? Yes, examples redis. Okay. Now if we go back and watch, even though we only specified one container in the pod spec, we can see it's actually running two containers. If we go ahead and describe the pod, let me just do it this way, we'll see that there is the Redis container we defined, and there's also the Telegraf container that was injected by telegraf-operator, and we can see that its CPU limit is set to 750 millicores, so 0.75 of a single CPU core.
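Conceptually, the configuration the injected sidecar ends up running is just the concatenation of the pod's inputs annotation with the class definition, roughly like this (values carried over from the earlier sketches, with a placeholder token):

```toml
# Rough sketch of the generated sidecar config: the pod's inputs
# annotation concatenated with the "app" class from the classes secret.
[[inputs.redis]]
  servers = ["tcp://localhost:6379"]

[[outputs.influxdb_v2]]
  urls = ["http://influxdb2.influxdb2:8086"]
  organization = "demo"
  bucket = "demo"
  token = "<token>"

[[outputs.file]]
  files = ["stdout"]
```

The sidecar's resources come from annotations along the lines of `telegraf.influxdata.com/limits-cpu: "750m"`, per the README.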
We can see it's mounting the Telegraf configuration from a secret that was generated by telegraf-operator. Basically, when the pod was about to be created, telegraf-operator combined the whole Telegraf configuration, put it in that secret, and started Telegraf; it also tells the Telegraf sidecar to watch that configuration to allow hot reloading, which I'll explain in a bit, because that's an interesting feature of telegraf-operator. Anyway, at this point I believe the pods are already running. Actually, let's use something more visual: we're going to run a tool called k9s, which is a nice console-based UI for a lot of Kubernetes-related things, and it's much better than what I was doing before. I think this is going to be more visible now. So this is my pod with the sidecar included. I can take a look at the logs of the Telegraf sidecar, and because we told it to also log all the metrics to standard out, we can see that we already have the metrics here. The metrics are in line protocol, which is what InfluxDB is built around. This is just because we told Telegraf to write to standard output and didn't tell it to use any other format, so it writes line protocol. We can see the data flowing. Now I can go back to my InfluxDB, and I can see that I have a lot of my Redis data. Actually, I didn't even know what to look at; let's say maxclients, and I can see that my maxclients is 10k. I could probably also look at a lot of other metrics, but because there's nothing really happening, only some metrics change over time. I can also just have it show all the metrics, and show the raw data, and we can see there's a lot of data in there.
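For reference, line protocol is a simple text format: measurement name, comma-separated tags, fields, and a nanosecond timestamp. An illustrative line (not captured from the demo; field names are examples of what the Redis plugin reports):

```
redis,host=redis-0,type=app connected_clients=1i,maxclients=10000i 1637000000000000000
```

Here `redis` is the measurement, `host` and `type` are tags (including the `type=app` global tag from the class), and the `i` suffix marks integer fields.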
And we can see that the Telegraf sidecar is reporting all this. So let's try something more practical: say I want to monitor how much memory is being used. I already have that, and for any other workload that I deploy in my cluster, telegraf-operator will automatically inject the sidecar. We can also see that the type tag is set to app. The InfluxDB UI works in such a way that I can build out a query using just the UI, and I can filter by all of the tags we're setting; type equals app was set when we were creating the Redis deployment. So let's go back and remove this a bit: I could start by just filtering data coming from my applications, and then go back and look at all the fields they have, right? We also have another example we could deploy, which deploys NGINX. It's an interesting example, because previously, with Redis, we were specifying raw Telegraf configuration, but if your application already exposes metrics in the Prometheus format, you can instead say: go scrape Prometheus metrics on these ports, or on one port. I could just say: scrape port 8080, this is the path to scrape, scrape every five seconds, and the protocol is HTTP. The last annotation we have here says to also gather Telegraf's internal metrics. So once I apply the examples DaemonSet, this will deploy the NGINX DaemonSet. You can see it's being deployed, and it's slowly coming up. And if I go to the logs, right, it's saying that it can't really scrape the metrics, because NGINX is not listening on those ports.
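A sketch of the annotations for this Prometheus-style scraping, using annotation names as documented in the telegraf-operator README (worth double-checking there; the values match the narration):

```yaml
# Pod template annotations for scraping Prometheus-format metrics (sketch).
metadata:
  annotations:
    telegraf.influxdata.com/class: app
    telegraf.influxdata.com/ports: "8080"      # port(s) to scrape
    telegraf.influxdata.com/path: /metrics     # assumed metrics path
    telegraf.influxdata.com/interval: 5s       # scrape every five seconds
    telegraf.influxdata.com/scheme: http
    telegraf.influxdata.com/internal: "true"   # also gather Telegraf's own metrics
```

telegraf-operator turns these into a `prometheus` input section in the sidecar's config, so no raw Telegraf configuration has to be written by hand.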
Also, our NGINX is not running any application that would expose those metrics. But because we also enabled the internal metrics, we can see some basic metrics that Telegraf itself is reporting. So if we go back, we can see the internal data here, we can see gather errors, and a lot of other data slowly being gathered, and based on that we should be able to build a lot of dashboards. Let me show a quick example of that. This isn't telegraf-operator specific, but let's show how I could just go and say: okay, I want to see the memory usage, maybe. Then I can save it and I have my dashboard. That would be an easy way to go from having my workload in the cluster to being able to visualize it in InfluxDB. We can see, if I go back to the logs of the Telegraf sidecar, that the data keeps coming in. One other thing I wanted to show that's really interesting is, as I mentioned, we also support reloading of configuration. So I could just add a new tag, let's say newtype equals application, and for the other class we could say newtype equals default. Okay. Once I deploy these classes, the only thing I'm doing is changing a secret that telegraf-operator is using. If we keep watching the logs, this should take around one minute for telegraf-operator to notice, because that's how long it takes for the secret mounted inside the container to be reloaded. In around one minute, telegraf-operator will pick it up and say: okay, I see that the configuration has changed.
So it's going to reload it, but it's also going to check the secrets it created for the Telegraf sidecars and go and update them as well, and we should start seeing the new data in a few minutes. This is really useful, and it's something we use a lot at InfluxData; we started using the hot reload recently, and it was one of the things we really, really wanted. Because whenever any configuration changes, we don't want to restart the whole workload or manually restart the Telegraf sidecars. What we really wanted, and now have, is that once we change the settings, telegraf-operator is smart enough to detect it and decide which things actually need to be updated. So we can see that it decided not to update the secret for the nginx DaemonSet pods, because nothing changed there, since we didn't change the basic class; but it did update the secret for redis, because the class there was app. So if I go back, and this is a mistake I made, and also add this tag to the basic class and deploy it, then in around one minute we should see it reload again, and in a while we should see another log message saying it updated. But right now I can go back and try to filter on the new tag. One second. Okay, so maybe the change wasn't reloaded yet at the Redis level; we'll take a look at that in a bit. So, Wojciech, just to summarize what you're doing: you've got telegraf-operator in your local kind cluster, and now you're adding more and more things for it to monitor. Right, so maybe that's a good point.
I'll try to summarize what I've been doing and what I'm trying to show right now. We started with a cluster that did not have any workloads in it, which would often be the case, or maybe you would already have some workloads. Then we deployed telegraf-operator, which starts injecting the Telegraf sidecars into any new pods that are created. So for any new workloads, or any existing workloads that have the annotations added, the Telegraf sidecars get injected. And because changing the annotations on a pod means the pod gets recreated, whenever we add the annotations, new pods get created and they start out with the Telegraf sidecars included. One other thing I tried to show, and maybe I should have done a better job explaining it, is that all of that is day one of operations: you deploy telegraf-operator, you add annotations to your workloads, and you start seeing the data inside InfluxDB or wherever else you're sending it. But as you move into day two of operations, sometimes you need to change some of the settings, and this is an important aspect. Or sometimes you need to, let's say, rotate your tokens, which I assume would not be manual but some automated process, but it's something that should be happening. So say you generate a new token, you have an automated process go and update it in the classes definitions, and then after a while, let's say after 24 hours, you delete the old token and expect everything to be using the new one.
If the hot reload were not in place, this would mean that all of the workloads, or at least the Telegraf sidecars, would have to be restarted. With the hot reload functionality in place, telegraf-operator and the Telegraf sidecar take care of this automatically, and day two operations are much easier. Now, Wojciech, you added the hot reload functionality, what, two or three months ago? Or maybe it's a little bit longer now? It was definitely this year; I don't really remember exactly when. It's all a blur. But it came out of exactly what happened with our internal use cases. This was a pain point for a lot of things we're doing internally: in some cases we just want to change some settings. For example, we want to change the frequency at which we gather some of the data, because we want to increase or decrease the amount of data we're storing, or we want to send some of the data to other places. We may be monitoring some data in our internal systems, but we also want to send some of it to production systems, because we want it in the same place our customers use it, so we can use that for monitoring too. Yeah, it was a huge gain, because an engineer would go and change, like you said, the frequency, and then the next question would be: they'd go and look and say, it hasn't changed, what's going on? So having that hot reload, which was added earlier this year, is fantastic. And I also wanted to say, as you mentioned, we're using this in-house. So yeah, it was definitely a frustration point when people would make a change, then look for the change, and it would basically have to wait.
It would have to wait, as Wojciech said, until it naturally got restarted, which is kind of a funny use of the word naturally, but let's just ignore that. Whenever the actual application code changes, we still restart it and see the changes, but the thing is, that could be anywhere between a day and a week, depending on how often the code changes. With hot reload, it's a matter of minutes. And like you said, this was a big thing for us, a huge improvement. So, going back to the dashboard and the data we have here: right now, after the reload, I can see the new tag, the field I added, and I did not go and restart anything. This is the thing we talked about. It's difficult to show because it takes a few minutes for all the Kubernetes mechanics to kick in, change the underlying secrets, and then for the underlying watch mechanism to notice. But in Kubernetes reality, waiting a few minutes for a change to get deployed to hundreds or thousands of Telegraf sidecars is very acceptable, as opposed to what we mentioned before, which would be a matter of days or weeks before the data is visible. So now I can go in here and see my internal metrics as well. This is a huge improvement, and I think a really nice feature of telegraf-operator. One other thing we wanted to show: if we were to look at the logs of, say, the Redis sidecar here, we would not easily find the message that the configuration was reloaded, because we keep seeing this data flowing in. But if I were to, say, remove the file output, deploy that, and then wait a few minutes while we do something else, I'll see that I no longer have my data being written to standard output, which is also a pretty interesting feature. Why would someone use that feature, Wojciech? What would they use it for? I mean, I think the standard output is more of a debugging tool.
The reason we include it is for when people are developing, because then you don't have to go elsewhere to see the data. Okay, so the data is still going where it's going. Right, right. Okay. Yes. So one of the nice things about Telegraf is that we can have it write to multiple outputs, right? Telegraf itself has pretty good documentation of all the different types of plugins. For outputs, I could be writing to a ton of things. We were just using the file output, so we can look at its README file, and we were also using the influxdb_v2 output (the examples use v1, which is kind of interesting). Basically, we could configure a lot of things, and we could also filter things at the output level, which, as you mentioned, is pretty powerful. I think the nice thing about Telegraf itself is that if for some reason it can't write, or can't read, or something isn't working at the time, it's going to retry, and it's going to buffer the data, and it's going to be smart about how much data it can buffer before it starts throwing data away, and all of that is configurable, which is really nice. We were toggling inputs and outputs and Telegraf would just handle it automatically. Technically, if I disabled one of the outputs, and it wasn't able to write to another output, it would be smart enough to realize it's the same output and keep retrying the same buffer. And indeed, we just changed the configuration, and we can see that it was just reloaded and it stopped writing to standard output.
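The buffering behavior described here is controlled by the agent section of the Telegraf configuration; a sketch with illustrative values:

```toml
# Sketch of Telegraf agent buffering settings (illustrative values).
[agent]
  interval = "10s"            # how often inputs are gathered
  flush_interval = "10s"      # how often outputs are written
  ## Metrics kept in memory per output while that output is unreachable;
  ## once the buffer is full, the oldest metrics are dropped first.
  metric_buffer_limit = 10000
```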
But the nice thing is, we can do all of this in the Kubernetes world, where sometimes we don't want to restart things. We have Deployments, StatefulSets, and other types of workloads, and we have workloads where a single type of microservice has hundreds of pods. Restarting all of them just because we want to tweak a single setting isn't great, whereas here we can just apply a small change, reload it, and the whole system picks it up. And nothing restarted; Telegraf itself, the sidecar, was not even restarted, it all happened entirely within Telegraf. We spent a lot of time inside the company, across teams, getting all of this working. And I think in general Telegraf is a really nice tool for monitoring Kubernetes, because, as I've shown, it's really simple to support plain Prometheus metrics, scraping them so they end up in InfluxDB, which is what we use a lot ourselves internally, because a lot of languages make it natural to expose metrics in that format. It's really neat that we can just specify the port or ports and the path, and telegraf-operator will generate the config out of that. But also, if you know you're running something that Telegraf knows how to scrape, you can just use one of the many, many plugins: you inject a small snippet and telegraf-operator glues it together with where the data should be output. And you can also have additional settings in the classes, so it is really easy to manage. From our experience with large clusters, where we have dozens and dozens of those classes to manage, it is really useful to be able to do that. One of the community-contributed features that I think we'll be showing in the next release, which is happening really soon,
And what I'm really excited about is the ability to also reference ConfigMaps or Secrets, and to reference some of the pod metadata. So if I want to get some of the Kubernetes metadata, I can expose it as an environment variable through an annotation. I believe the annotation is something like env-fieldref. I can say that my variable name is, say, NAMESPACE, and its value will just be metadata.namespace. And like I said, this is the new feature that's coming: the ability to reference various things. In this case, I'm referencing a Kubernetes field, meaning this will tell my pod what its namespace is. Or the name of the pod: that would be the pod name, and I could expose that too. This is useful in some cases where we really want to tie metrics back to some of these fields. I could also get the IP address of the pod, which I could then use to filter things. But I could also do something like a secretKeyRef: I could point at my secret, which would be, say, my-token-secret, and then the key name, let's say token. So with that, and I'll just use a simple example, say TOKEN referencing that key, I could have my token managed by a Secret, reference it, and Kubernetes would load it as an environment variable for the Telegraf sidecar, and I could use it in the configuration without having to inject it anywhere else. This is useful when, for example, other tools manage the secrets, or the secrets are managed by our application, because then the Telegraf sidecar would just pick them up. This is one case where hot reload wouldn't work, because of the way Kubernetes handles environment variables internally. Maybe that's something we could extend in the future.
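To make the shape of that feature concrete, here is a sketch of what such annotations might look like. Since this contribution was still in review at the time of the talk, the annotation names and the separator between secret name and key below are my best guess and may differ from the released syntax; the secret and key names are placeholders:

```yaml
metadata:
  annotations:
    # Expose the pod's namespace as $NAMESPACE inside the Telegraf sidecar,
    # using a downward-API style field reference.
    telegraf.influxdata.com/env-fieldref-NAMESPACE: metadata.namespace
    # Expose a key from a Secret as $TOKEN ("my-token-secret" and "token"
    # are placeholder names; exact reference syntax may differ).
    telegraf.influxdata.com/env-secretkeyref-TOKEN: my-token-secret.token
```

The injected configuration could then use `token = "$TOKEN"` in an output plugin, keeping the actual credential out of the annotations entirely.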
But this is still a pretty nice piece of functionality, because if for whatever reason we have some data in other Secrets and we just want to reference it, that's much easier than having to hard-code it in the annotation. And you said this is a community contribution that's in review and will be part of the next release of telegraf-operator? Yes, I'm really hoping it will be. And I'm really excited about that, because every time we get a contribution to Telegraf or telegraf-operator, I think that's a really nice sign that people are using the tool and that people are willing to spend their time extending it. So we try our best to help whenever anybody contributes in any way. Even if someone just opens an issue: we've had someone open an issue because when they ran it, we had forgotten to create the namespace, and we're fixing those kinds of things. And that's also great, because it means someone took the time to give it a try, and when something was broken they let us know so we could fix it for other people. That's really cool. And Wojciech, did you have anything more you wanted to show or to share today? I think we're done now; Caitlin, I think we're finishing up with our part of the show. I mean, you guys aren't done yet. We're not done yet. Well, thank you for that, Wojciech. I know live demos are always fun. So, Wojciech, you already sort of answered this, but how does a newbie get his or her arms around the APIs? I know you showed the docs link; is there anything else that a community member can do to get some help? I think the first thing is just getting onboarded with InfluxDB. The easiest option is to go to cloud2.influxdata.com and play with the SaaS offering, because there's a free tier that provides most of what you need. Okay, I guess my typing was off, so basically... now they're going to see where I got my picture from.
So basically, just sign up for InfluxDB Cloud, which is the easiest way to do this, or run the open source version, whatever you want. Like I did in my demo, there is a container you can just run, there are binaries you can just grab; there are multiple ways to run InfluxDB. And then when you go to the UI, there's a way to get started with most languages. We also provide ways to get Telegraf configurations. That is a slightly longer process, but basically there are multiple ways to get the data in. You can also use the API directly, but I think we try to do our best to get people started with whatever it is that they need to do. So I could just say I want my system data, and it's basically going to generate a whole config for me. This is just a Telegraf configuration I can save; I can run Telegraf on my machine, and it's going to start writing data to InfluxDB. Well, let me tell you what I do to figure out anything on InfluxDB: I go and find blog posts from the fabulous Anais. She has one that's like "TL;DR InfluxDB Tech Tips: creating buckets with the InfluxDB API". I am completely biased, but I think her blog posts are fantastic for a newbie, and I think they're also really good for someone who is not a newbie. So I would go look for some of those InfluxDB Tech Tips, where she talks through how to use the API to do different things. And one other thing worth mentioning is the influx CLI. It's a great tool for a lot of things: anything from creating buckets to writing data and reading data is possible via the CLI. There's also a way to export and import data and objects like dashboards. So it's a really powerful tool, and it's also easy to get set up. Awesome. I think you also already answered this, but can you share this data locally?
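For someone who wants to skip the UI and go straight to the API, here is a minimal sketch of writing a point over the InfluxDB 2.x HTTP write API using only the Python standard library. The endpoint path and `Authorization: Token` header follow the documented v2 write API, but the URL, org, bucket, and token in the usage comment are placeholders you would replace with your own:

```python
import urllib.request

def line_protocol(measurement, tags, fields):
    """Build a single InfluxDB line-protocol record (no timestamp)."""
    tag_str = "".join(f",{k}={v}" for k, v in sorted(tags.items()))
    field_parts = []
    for k, v in sorted(fields.items()):
        if isinstance(v, bool):          # bool before int: bool is a subclass of int
            field_parts.append(f"{k}={str(v).lower()}")
        elif isinstance(v, int):         # integers get the "i" suffix
            field_parts.append(f"{k}={v}i")
        elif isinstance(v, float):
            field_parts.append(f"{k}={v}")
        else:                            # strings are double-quoted
            field_parts.append(f'{k}="{v}"')
    return f"{measurement}{tag_str} {','.join(field_parts)}"

def write_point(url, org, bucket, token, line):
    """POST one line-protocol record to the v2 write endpoint; returns HTTP status (204 on success)."""
    req = urllib.request.Request(
        f"{url}/api/v2/write?org={org}&bucket={bucket}&precision=ns",
        data=line.encode(),
        headers={"Authorization": f"Token {token}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Example (placeholder credentials):
# write_point("https://cloud2.influxdata.com", "my-org", "my-bucket", "my-token",
#             line_protocol("cpu", {"host": "web-1"}, {"usage": 12.5}))
```

This is essentially what Telegraf and the client libraries do under the hood, which is why a hand-rolled client like this interoperates with data written any other way.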
Or can you share this code for us to test it locally? I'm assuming it's all in the repo? Yep, I think that's where you get the README. That question inspired me to say: oh yeah, we forgot to point you at that. The Makefile is also a good starting point, because it provides easy-to-use make targets: the kind-start one deploys a lot of things, including InfluxDB, and make kind-test basically deploys most things, even deploying Redis and showing you at the end that Redis has the sidecar container. That sleep is going away, by the way; I just need to merge a PR, I didn't have the time today. It's going to be a proper kubectl wait, so we will wait for the operator and not just assume 20 seconds is enough. But there are a lot of make targets that make it super easy to start. And you touched on this briefly, but how is InfluxData using telegraf-operator internally? It sounds like it was developed from an internal pain point as well. So it was developed, I think, for both internal and external use, but when we started deploying workloads and thinking about how to handle that scale of Kubernetes clusters and large workloads, we were discussing how to get all the data. Given that we already had Telegraf as a very successful project with a long history, we wanted to use Telegraf, and we were wondering how to do that; telegraf-operator was just the natural way of doing it. So we use it a lot for most of our workloads: one of the first things we deploy in our clusters is telegraf-operator, so it's automated, it's among the first things deployed. And then for all the workloads we're monitoring, we just add the same annotations as shown. They may be slightly more complex than the examples we're showing, but they're still just annotations.
And for a lot of the code we write internally, we just expose metrics as Prometheus metrics or expose them in other ways. It all depends on what we're monitoring, but we try to use the native Telegraf input plugins; for things like Redis, we'd just have those plugins get the data from Redis directly. For things that expose metrics, we get them as Prometheus metrics. It really depends on the use case, but most of the things we deploy just have the annotations, and Telegraf gets deployed automatically. And just kind of curious, a question I have for both of you: what are you working on in the next six months that you're really excited about, that the community will get excited about as well? That is a good question. I think we should also go back to why we're using sidecar containers as opposed to DaemonSets, because we get this question a lot, and I'm actually surprised it hasn't come up. We deploy Telegraf as a sidecar, which means that if we have lots of workloads, there are a lot of Telegraf sidecar containers, a lot of processes, which could be avoided with a DaemonSet. We chose the sidecar because we noticed that a Telegraf per pod is more successful at getting the data, and at being able to buffer it if things ever go wrong temporarily. So it's much more reliable when it monitors a single pod. But we're also trying to figure out something in between running a Telegraf sidecar for each pod and running it as a DaemonSet monitoring all the pods on a specific node.
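To illustrate the native-input-plugin path mentioned above, here is what the Redis case looks like as annotations. This sketch follows the telegraf-operator README examples; the class name and image tag are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis
  annotations:
    # Inject a raw Telegraf input-plugin snippet instead of Prometheus scraping;
    # telegraf-operator merges it with the outputs defined in the class.
    telegraf.influxdata.com/inputs: |+
      [[inputs.redis]]
        servers = ["tcp://localhost:6379"]
    telegraf.influxdata.com/class: app
spec:
  containers:
    - name: redis
      image: redis:6
```

Because the sidecar shares the pod's network namespace, the plugin can reach Redis on localhost without exposing the port outside the pod.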
So just to explain briefly, a DaemonSet is something where there is one pod for each Kubernetes node, so for each dedicated VM or bare-metal machine, depending on where Kubernetes is running. We're trying to figure out whether there is a way to also handle workloads that don't really generate a lot of metrics without injecting the Telegraf sidecar into each individual pod. And I think that is an exciting challenge, because maybe there could be some compromise, like some things being monitored by DaemonSets and some things being monitored by sidecars, but we don't really have a good answer to that yet. So we're trying to tackle this, because it's one of the things that could be helpful for us internally as well, and I'm sure a lot of you have run into this: a DaemonSet, one for all the pods on a node, means too much data to gather in one place, while a sidecar for every single pod means too many resources being used for sometimes really small microservices that don't get a lot of traffic. And what about you, Pat? I mean, you're nodding along, so you clearly agree, but anything else? In terms of telegraf-operator, I think Wojciech covered it, but in general we're just going to continue to make InfluxDB, our Cloud 2 SaaS product, screaming fast, and my team is working to make it so that our developers can deliver sweet, sweet software to the users more quickly. So that's what I'm always excited about. Awesome. Well, thank you both. I feel like there are going to be lots of people checking out this webinar, and they might come bug you in the community Slack with follow-up questions.