Hello everyone, and welcome to today's webinar. My name is Katelyn Croft, and I'm joined today by Zoe Steinkamp, who's going to be talking about getting better observability with OpenTelemetry and InfluxDB. Please post any questions you have in the chat or the Zoom Q&A and we'll answer them at the end; the session is being recorded. And without further ado, I'll hand things off to Zoe.

All right, awesome. So today we're going to go over how to gain better observability with OpenTelemetry and InfluxDB. My name is Zoe Steinkamp, and I'm a developer advocate here at InfluxData. If you want to reach out to me on LinkedIn or add me, feel free. As a developer advocate, part of my job is to advocate not only for our company but also for some of the other open source tech we work with, like the OpenTelemetry project, and in general to gather feedback.

First things first, our agenda. We'll start with an introduction to OpenTelemetry, covering logs, traces, and metrics, then give an overview of InfluxDB Cloud powered by IOx, the new version we just released, and go over a few of the key features that make it useful for OpenTelemetry data. Then we have a project you can pull from GitHub and follow along with, or you can just watch my slides, and we'll do a live demo at the end. The project hooks up Jaeger, Grafana, HotROD (which I'll go into later, but basically it generates fake traces for you if, like us, you don't have a real server to pull from), and Telegraf. And finally, more learning resources at the end.

So first, an introduction to OpenTelemetry: logs, traces, and metrics. Logs are records, events, or messages generated by applications or systems during their execution.
One thing to note here: for the longest time, InfluxDB has been what we call TSM-based, more of a time series metrics database. But now, with the new IOx engine, we can store logs and traces as well. The project we'll go over later focuses mostly on the tracing side, but we'll be collecting logs and metrics too.

This diagram shows roughly where each of these sits: metrics are aggregatable events, logging covers every type of event, and tracing is request-scoped. That means when somebody clicks a button on your website, you're likely to get a trace about it, whether they got a 400 or a 200, whether things went well or badly. A log can also capture that button press, but traces can be irregular; it might be hours before another person presses that button, so they're not necessarily consistent. Logs and metrics are usually much more consistent, tracking constantly throughout the day. These are the three signals OpenTelemetry currently tracks, and they're very important, especially in deployments and DevOps.

This is a nice architecture diagram of how it ends up looking. I'll go over the key points of the OpenTelemetry project on the next slide, but basically what it's trying to do is bring everything into one place. It used to be a build-it-yourself game: pick whatever you want for your traces, your metrics, your logging, and maybe some other things while you're at it, with all kinds of different collectors and storage backends. Everybody built it however they wanted, based on the tools they already knew or whatever was popular.
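To make the three signals concrete, here's a rough sketch in Python of what a single metric point, log record, and trace span might each carry. The dict keys are illustrative stand-ins, not OTLP's actual protobuf schema, which is considerably richer:

```python
import time
import uuid

now_ns = time.time_ns()

# A metric: a numeric, aggregatable measurement emitted at a regular interval.
metric = {
    "name": "http.server.request.count",
    "value": 42,
    "timestamp": now_ns,
    "attributes": {"service.name": "checkout", "http.status_code": 200},
}

# A log: a free-form event record, optionally correlated to a trace.
trace_id = uuid.uuid4().hex  # 128-bit trace ID, hex-encoded
log = {
    "body": "order submitted",
    "severity": "INFO",
    "timestamp": now_ns,
    "trace_id": trace_id,  # links this log line to the request below
}

# A span: one timed operation inside a request-scoped trace.
span = {
    "name": "POST /order",
    "trace_id": trace_id,
    "span_id": uuid.uuid4().hex[:16],  # 64-bit span ID
    "start_ns": now_ns,
    "end_ns": now_ns + 12_000_000,  # 12 ms later
    "attributes": {"http.status_code": 200},
}

duration_ms = (span["end_ns"] - span["start_ns"]) / 1e6
print(f"span {span['name']} took {duration_ms:.0f} ms")
```

Note how the shared `trace_id` is what ties a log line back to the request that produced it; that correlation is a big part of why storing all three signals in one place is useful.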
People were spread out across the tech they used for this. The idea with OpenTelemetry is that you use one type of collector, which grabs all three types of data, and it's more streamlined. You also know what data to expect, because OpenTelemetry defines what the parts of logs, traces, and metrics are going to store. That said, you can obviously store additional data points if you have them, often things like location-based data. Overall, the goal is to streamline this process, because it can be very frustrating as a DevOps engineer, when you change jobs or switch cloud providers, to have to learn brand new tooling for all the same problems. That's not super fun.

So the idea with the OpenTelemetry project is that it's all wrapped up into one service: a single vendor-neutral collector binary and a vendor-agnostic instrumentation library. What these wonderful words mean is that there are many different collector binaries and they're vendor-agnostic: it doesn't really care whether you store your data in InfluxDB or in something like a SQL database. The one caveat I'd add is that "vendor-neutral collector" is accurate in the sense that any vendor can create a collector, like we have, but you do have to follow the project's docs, so not every vendor is going to have an OpenTelemetry collector, because they have to build it themselves. It offers an end-to-end implementation to generate, collect, process, and export telemetry, which is what I was talking about before.
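To give a feel for what "one collector for all three signals" looks like in practice, here's a hypothetical OpenTelemetry Collector config: one OTLP receiver feeding traces, metrics, and logs through the same pipeline shape into an InfluxDB exporter. The exact field names depend on your collector build and the exporter's version, so treat this as a sketch and check the exporter's README:

```yaml
# Hypothetical collector config; verify field names against your
# collector distribution and the influxdb exporter's documentation.
receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:

exporters:
  influxdb:
    endpoint: https://us-east-1-1.aws.cloud2.influxdata.com  # example region URL
    org: my-org
    bucket: otel
    token: ${INFLUX_TOKEN}

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [influxdb]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [influxdb]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [influxdb]
```

The vendor-neutral part is visible in the structure: swapping the storage backend means swapping the exporter block, while the receivers and instrumented applications stay untouched.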
The whole idea with OpenTelemetry is that it covers everything from the beginning of the collection process all the way to storage, and it's all streamlined so you don't have to do it yourself, to an extent. But you still have full control of your data, with the ability to send to multiple destinations in parallel. Like I said before, it doesn't care whether you're sending to InfluxDB or somewhere else, and it's actually quite common to send to multiple places. Open standards and semantic conventions mean vendors can easily build their own data collection agents; it's not necessarily difficult, just something they have to do. And it offers a path forward no matter where you are in your observability journey.

One reason I'm talking so much about this: the project we'll look at uses an OpenTelemetry collector that we built ourselves, and we've been working side by side with the OpenTelemetry project for about two years now. It's been more intense in the past six months, I'd say, but we've been following the project, and some of our engineers have been making commits, for over a year.

So now an overview of InfluxDB Cloud. This isn't going to be a massive overview; like I said, we're focusing on the key features that make it relevant for this type of data. First things first, the new engine is built on top of Rust, Apache Arrow, Apache Parquet, Arrow Flight, and DataFusion. Apache Parquet gives us an efficient columnar file format for storage, which also lets us integrate with more connectors, and Apache Arrow is the in-memory format the engine is built around, designed with SQL in mind. Arrow Flight and DataFusion are what give us SQL connectivity.
So going forward, instead of querying InfluxDB with Flux, you'll be able to query it with SQL; the transport piece of that is Arrow Flight, which I'll go into when we get to the project. These are the key technologies the new engine is built on, and you can read plenty of blogs that go deeper; Paul especially loves to talk about why he built with these and the features they offer.

This is also our new architecture and deployment. Some of you probably haven't seen the old version of this, and that's okay. The idea is that your data sources are all timestamped data. Like I said, it doesn't matter whether it's a metric, an event, a log, or a trace; we can accept it all as long as it's timestamped. Data collection is pretty much the same as before: it's mainly Telegraf and the client libraries, those are the big two. The client libraries are currently being revamped, because not all of them can do SQL querying yet, but over the next couple of months most of our top five will be able to, and I think within the next year almost all of them will.

Next, data storage and transformation. This has changed a bit. We've focused a lot more on collection, storage, and the SQL queries, which can't yet do quite as much as what Kapacitor used to offer. But we are planning to bring back certain features Kapacitor offered, and in general we're building out a lot more documentation and working on the DataFusion library so our SQL queries can do things like downsampling. And finally, data visualization and analysis: here we've been focusing much more on integrations, so when I do visualizations in this project, I'll be showing you the Jaeger UI as well as Grafana.
We've focused a little less on our own visualization library, because a lot of people just tend to use these outside tools.

Unlimited cardinality is the first big piece here. What cardinality means: back in the day, you would have something like a trace, and a trace can be pretty large. It can carry roughly 30 to 100-plus tags, you could call them, data points attached to that value. It would say things like where the trace came from, maybe even the server it was going to; it just carries a lot of information. The problem with our old engine is that we would max out at about 30 or so tags before things started to get a little hairy. Now we have what I'm calling unlimited cardinality; in practice it's comfortable up to about 200 tag and field columns, which is still a huge amount more and will cover most data. It handles traces and logs perfectly fine, no problem whatsoever. So we've solved this cardinality problem: you can now have a much larger number of fields and tags on your data, which means you can attach more information, things like locations, metadata, et cetera. That was a big piece, obviously.

Native SQL support. I will say that SQL is not strictly a requirement for dealing with OpenTelemetry data, I'll just put that out there. But SQL is really nice in that a lot of people like to use it. I wouldn't quite say they "code" in it, but for lack of a better term, they query with it, they work with it a lot. So this allows a lot more people to use the platform and query their data back out. And a lot of the integrations we're working on, things like Power BI and Tableau, expect to be able to query databases in SQL to get data out.
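As an illustration of what querying trace data with SQL could look like, here's a hypothetical query against span data. The table name, column names, and time syntax are assumptions about how a collector might map spans into the database, so adjust them to your actual schema:

```sql
-- Hypothetical: measurement and column names depend on how your
-- collector maps span data, so check your schema first.
SELECT
  "service.name",
  COUNT(*)                  AS span_count,
  AVG(duration_nano) / 1e6  AS avg_ms
FROM spans
WHERE time >= now() - INTERVAL '1 hour'
GROUP BY "service.name"
ORDER BY avg_ms DESC;
```

This is the kind of aggregation that tools like Tableau or Power BI expect to be able to run directly, which is why native SQL support widens the set of integrations so much.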
So that's another key thing going forward: now that you can query with SQL, you'll be able to integrate with more features, more integrations, more vendors.

High performance data ingestion. This has actually always been somewhat the case; we've always been pretty good at handling high write and query loads. But it's gotten even better, and over the next couple of months we'll have a lot of data coming out where we show off the capabilities of the new engine and compare it against the old one, so we can show where it shines and how it makes things faster. In general this matters because OpenTelemetry data tends to arrive at a pretty high ingest rate. Like I said, traces aren't always a consistent stream, but logs and metrics are definitely noisy; you're going to get them every single second. And sometimes traces can be the same, especially on a very busy website.

Seamless integration with observability tools. This is what I was already touching on: SQL lets us integrate more, and we're also working on more integrations directly. The one we're talking about today, OpenTelemetry, is I'd say the biggest integration we're working on, outside of pandas and a few other data science tools. We also have integrations with Grafana, and we work with tools like Jaeger. I wouldn't call that a full integration, in that we don't work with them as closely, but the technologies work together easily.

So now I'm going to get into the project. First things first, for those of you who want to grab it, I would just grab it now. I'm trying to remember whether it's linked at the end of this deck, and I can't recall.
So I'd grab this link now, just to make life easy. Basically, this is a GitHub project that stores OpenTelemetry data in InfluxDB, running on Docker. As I said before, HotROD generates fake traces, for lack of a better term, for us to use to show how this works. Once the OpenTelemetry data is up in InfluxDB, we show it in the Jaeger UI, and we'll show it in the Grafana UI as well. Parts of this can technically be replaced with Telegraf. One thing to note: we're currently in the process of getting our OpenTelemetry collector onto the official OpenTelemetry collector list. We've been working on this for a while; it hasn't been the most straightforward thing, but our engineers are working out the minor kinks, and then it will finally go up on the OpenTelemetry project.

As I mentioned, HotROD creates what it calls rides on demand; it's generating traces and other such data. Basically, and we'll see this when I actually pull the project up, you click these buttons at the top and it creates traces for you. The idea is that these represent different websites, different services, so you can compare traces from multiple sources.

Jaeger is where you can actually start to see these traces. Jaeger is a completely open source tool that lets you visualize trace data, and it gives you some pretty awesome features, like dependency trees that show where your traces are coming from, where you're getting the highest volume, et cetera.
I'm not going to go in depth on the Jaeger UI, honestly just because it's not my forte, but for those of you who want to check it out, this project uses it and we'll go over it briefly during the demo. Basically, Jaeger is a super awesome open source tool for trace visualization.

Grafana. We also built our own Grafana dashboard for this project. With it you'll be able to see, again, a lot of trace data; we also have a table or two, I think, dedicated to logs and metrics. Like I said before, this project focuses a bit more heavily on traces, but it's not a big deal to start building out Grafana dashboards for the logs and metrics as well. If anything, that's probably the most straightforward thing to do. And we have a brand new Grafana plugin which lets you query in SQL, which is super fun and exciting. In the docs for this project, we talk about how to hook it up to Grafana using what we call the FlightSQL integration.

You'll also be able to see your data within InfluxDB Cloud, which is where the data is actually stored. For those of you who haven't checked out the new Cloud product, what we're doing here is opening our otel bucket; the database is called a bucket, and in this project we named it otel. From there, we're asking for the logs measurement, and you can see all the different measurement options here. This is all based on how OpenTelemetry tends to want your data stored, plus, I think, a little editing on our side. You can see the results here in this table: all the logs that are available.
And this is a very small screenshot, but you can scroll right on this table to see more data. When we actually get into the demo, it'll all wrap itself together.

This is a very wordy slide that basically goes over everything inside the readme for the project, but I'll walk us through it. You'll need an InfluxDB Cloud account, because that's where InfluxDB 3.0 currently lives. You create two buckets, otel and otel-archive. The archive bucket is optional; the idea is to show a cold storage option with a longer retention policy, and personally I just use the otel bucket. You create an environment file with your authentication credentials. You install the FlightSQL plugin as per the readme; FlightSQL is what lets you query your data. You build and run the Docker images as per the readme. You import the dashboard with the JSON provided inside the demo Grafana dashboards folder; I've already done this in my project, but I'll show you where it is in the readme. And from there, you can create fake traces by clicking on a customer in the HotROD application.

Grafana setup details. This is best read in the readme, but when you set up Grafana, since you're setting up a local, open source instance, just make sure you make your credentials nice and easy, because this isn't public. You'll also want to import the dashboard. Again, this is best explained in the readme, but you're going to add the FlightSQL data source that's now offered, as well as a Jaeger data source.
You'll name it "open telemetry" and then upload that JSON file. So let's go into the demo, and we'll do some live coding. All right, give it a second here; it's going to get Docker up and running. Great, you can see we've got Jaeger, Grafana. I'm going to move this Zoom window out of my way.

So this is the project. Specifically, this is a version of the project that my coworker built, forked from the main InfluxDB observability repo. This is the project you're hopefully following along with; if not, now is a great time to grab that link. Here it talks about the credentials to add inside the environment file. It's pretty straightforward: it's basically your URL from inside Cloud (I'll show you where you actually get this), your token, your organization, your bucket, and again the archive bucket if you want it. Then finally, you build the needed Docker images, which I've already done; that's why I only had to run the Docker Compose file. Then you'll be able to see traces generated by HotROD: browse to HotROD at localhost:8080, query the traces in Jaeger, and Grafana is available on localhost:3000.

So let's go check this out. I'll probably have to refresh some of these pages, because I'm almost certain they're a little on the old side. All right, we're going to get Rachel's Floral Designs, refresh all of these pages, and find some traces. It's found us two traces. Awesome. We'll go back to HotROD and maybe get one more. And we've got three traces; I guess it actually did keep my old traces, interesting. Let's see what we can find on the frontend. Like I said, this is not exactly my forte.
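For reference, an environment file for a setup like this might look as follows. These variable names are hypothetical; the exact names the demo expects are defined in its readme and Docker Compose file, so mirror those rather than these:

```ini
# Hypothetical .env sketch; use the variable names from the project's readme.
INFLUXDB_URL=https://us-east-1-1.aws.cloud2.influxdata.com
INFLUXDB_TOKEN=my-read-write-token
INFLUXDB_ORG=my-org@example.com
INFLUXDB_BUCKET=otel
INFLUXDB_BUCKET_ARCHIVE=otel-archive   # optional cold-storage bucket
```

Keeping the token in an environment file rather than hard-coding it in the compose file also makes it easier to avoid accidentally committing credentials.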
That's weird, I thought it would actually do it for me. Oh, there we go. As you can see, we're getting some traces on the frontend; a lot of them are currently going through Redis and a few through MySQL. So this is a nice little system architecture view. You can also compare traces, if you can actually find them in here. I don't necessarily want to deal with grabbing trace IDs and such, but this is how the Jaeger UI looks once it's all up and running. It's pretty straightforward, and for those of you who are more comfortable using it, I'm sure you know exactly how you'd want to find and sort through your data.

But let's go ahead and load into Grafana. Okay, there's my open telemetry dashboard. Oh dear. Looks like my data might be loading a little slowly; give it a couple of seconds. Looks like it might be having issues with the data source. Well, this is live coding for you. It was working 30 minutes ago, and of course now it's not as happy. But as you can see, we've got some of our traces down here. I'm not quite sure why this isn't finding its data; I'll refresh the page one more time, you never know. Okay, maybe not. Sorry, I don't necessarily want to do a bunch of debugging while we're looking at this, but normally with this dashboard you'd see all these wonderful graphs, as I showed in the screenshot. I am trying to figure out why they're not working. Let's see if it's happier if I change the data source to Jaeger. No, that did not help at all. But we can see our latency histogram and the traces that we've already created, and we'll create a few more traces while we're at it. Yeah, we're getting more traces, but unfortunately these panels aren't loading.
I don't know if maybe I need to reload my dashboard since I've restarted this, but obviously my dashboard did load in just fine. I'll take a look at this later; it's probably less to do with the project and more to do with me doing something wrong.

Then we'll go over to Cloud and log in. In here, we should be able to view some of the data from my otel bucket. Yeah, there we go. Let's look at logs. We can filter by fields, like their attributes and their names, but for right now we'll just run with this. As you can see, I have my logs here, quite a few rows of them actually, and I'm only querying the past hour; these have all been created in roughly the past five minutes. And as I was trying to say before, you can scroll this over a bit, so you can see the trace ID that goes with each log.

This data really can't be visualized here. You have the visualization option, but for example this doesn't really work for visualization. Maybe the calls total metric might work... yeah, no. Most OpenTelemetry data is a little more, what's the word, specific, and for the visualization libraries you'll want to use with it, things like Jaeger or Grafana are much better options than the graphing we offer. Our graphing is more to confirm you actually have data inside your database, and I'd say the table does a pretty good job of showing that, yes, clearly we have some data here.

Yeah, it's a shame our OpenTelemetry dashboard isn't working like it was 20 minutes ago. I think it wants to be difficult. I don't know why it's so upset. That's okay... Yay, by me clicking on it, it worked! I didn't do anything. See, guys, this is magical. So this is all of our services running.
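Once logs and spans land in the same store with shared trace IDs, correlating them is essentially a join on `trace_id`. Here's a small local sketch of that idea, with plain Python dicts standing in for rows you might query back out of the spans and logs tables (the field names are illustrative):

```python
from collections import defaultdict

# Stand-ins for rows queried back from a spans table and a logs table.
spans = [
    {"trace_id": "a1", "name": "HTTP GET /dispatch", "duration_ms": 731},
    {"trace_id": "b2", "name": "HTTP GET /dispatch", "duration_ms": 312},
]
logs = [
    {"trace_id": "a1", "body": "Found nearby drivers"},
    {"trace_id": "a1", "body": "Retrying GetDriver after error"},
    {"trace_id": "b2", "body": "Found nearby drivers"},
]

# Group log lines under the trace they belong to.
logs_by_trace = defaultdict(list)
for rec in logs:
    logs_by_trace[rec["trace_id"]].append(rec["body"])

# Attach correlated logs to each span.
for span in spans:
    span["logs"] = logs_by_trace.get(span["trace_id"], [])

# The slow request now comes with its log context attached.
slowest = max(spans, key=lambda s: s["duration_ms"])
print(slowest["trace_id"], slowest["logs"])
```

This is the payoff of correlated signals: the slowest trace surfaces together with the log line hinting at the retry that caused the latency, without a second round of searching.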
You can see it scrolls quite a bit down, because even in the past two minutes we've picked up quite a few services. See, if you just click around enough, you can fix everything; that's how this works, right? Well, maybe not everything. Now it's saying "color field not found." I don't even know what that means. This is not what I wanted, but that's okay, we'll allow it. Let's see if we can get this one fixed. It's funny that I don't even necessarily apply anything and it still manages to fix itself. "No data found in the response." I was hoping the traces, of all things, would work. Oh well, I at least fixed two things by just poking at them. Like I said, this is just part of live coding; occasionally things don't quite work as expected.

But again, over here in the repo you'll find the dashboard to import under this folder. It's pretty straightforward: it's just the demo folder and then grafana, and from there you'll find all the charts to grab. You can already tell Docker is having an adverse effect on my computer. If you want the trace view, make sure it's enabled with the Jaeger data source, which as far as I know I've done. The images are automatically built and pushed. Yes, that's right: this otel collector image is automatically built and pushed to Docker. You can check these two out if you want; they basically just describe where the files are held. And this part talks more about the otel InfluxDB collector, which, like I said, we're currently working on making more widely available. Eventually this exporter and receiver are both going to be available in the OpenTelemetry project.
We're just wrapping them up and putting a bow on them so they're ready to go. There's also the Jaeger query plugin for InfluxDB, which enables querying traces stored in InfluxDB via the Jaeger UI, and if you want to run some tests, you can run those as well. Like I said before, this is a fork of the original project here, done by one of our engineers, Jacob Marble, who has been working a lot on the OTel projects. You can check this out as well; it builds the files a little differently, in the way he does his authentication and such.

Oh, speaking of which, real quick: here is where you get your URL. It's this piece of the URL; for example, US East is where I'm currently working out of. If you want your bucket ID, that's available right here. And your organization: you can grab it out of the URL, or you can also find it under settings and organizations. Sorry, I had to look for it, but it's in there.

When it comes to API tokens, you'll generate them over here. Quick note: if you create an all-access token, it gives the holder access to everything, all of your buckets, absolutely everything, so we do warn against doing this. Otherwise you can create a custom API token. With that, for example for the otel bucket, do make sure you give your bucket both read and write permissions, because otherwise we can't write the data in, and you can't get it back out to send onward to Grafana or Jaeger or anything like that. So just make sure that when you create your token, you give it both. And as for creating buckets, it's pretty straightforward.
You just go over here, give it a name like otel-2 or something, put in your retention preferences for when data should be deleted, and from there you create it. Very straightforward. So really quick, I'm going to turn off Docker, because it sometimes seems to mess with my slides, turn the project off, and put that down at the bottom.

So, learning resources. This is a try-it-yourself section. The Influx community GitHub is where that project lives; both observability projects live there, the one Jacob created and the one my coworker Jay created, which provides the Grafana dashboarding. And influxdata.com is obviously our main website, where you can sign up for a Cloud account and in general find more information about us, what we offer, what we're used for, et cetera.

Further resources: getting started at influxdata.com/cloud is always a good start. The community forums and Slack are both places where you can come to us for questions and answers. Our GitHub, again a link to the Influx community, is where you can also find all of our open source libraries. Our books and documentation talk more about how to get things up and running and why we do what we do. Blogs are a great resource for seeing new features as they come out, as well as user use cases; the blogs like to talk about what our customers are doing. And finally, InfluxDB University is a great learn-at-your-own-pace resource, completely free, so you can pick up a class there on something you want to learn about and take your time with it.

And that is the end of my presentation. So really quick, I'm just going to put us on that QR code, and then we can go ahead and take some questions. All right, there we go. Perfect.
And also, Zoe, I threw that GitHub link into the Zoom chat, so if you don't have your phone ready to scan the QR code, or you want it on your computer, the link is there. I also threw in a link to an upcoming webinar I mentioned earlier, with Gary, a product manager, where you can come learn all things InfluxDB 3.0; I'm sure you have lots of questions around that. All right, now let's jump into the questions.

"Could OpenTelemetry be a replacement or an enhancement to my current Telegraf setup?" So they're not exactly replacements or enhancements of each other. Telegraf is an open source ingestion agent, which means it covers a pretty wide range of use cases, whereas the OpenTelemetry collector we're currently working on is just for the OpenTelemetry project: it focuses entirely on traces, logs, and metrics. Now, you can get that data with Telegraf, kind of like how it says here that parts of the collector could be replaced with Telegraf. But Telegraf is not technically part of the official OTel collector list, and I don't think it ever would be; maybe one day. So if you're dealing with OpenTelemetry data, the OpenTelemetry collector might be a replacement for Telegraf, but it would probably never be an enhancement per se, if that makes sense. They're just different use cases for the most part.

"Regarding cardinality, what was the previous limit and what is the new limit in InfluxDB 3.0?" Previously, the limit came from the way we queried data back out. Normally you would have a timestamp, a value, fields, and tags. Fields, I think, are the ones where you'd specifically run into problems if you had a lot of them, because we would query off those fields.
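For context on the "Telegraf versus OTel collector" question: Telegraf does ship an OpenTelemetry input plugin, so it can ingest OTLP data even though it isn't part of the official collector list. A hypothetical Telegraf config for that path might look like this; the field names are illustrative, so consult the plugin docs for your Telegraf version:

```toml
# Hypothetical Telegraf config sketch; verify against the opentelemetry
# input plugin's README for your Telegraf version.
[[inputs.opentelemetry]]
  service_address = "0.0.0.0:4317"   # accept OTLP/gRPC from instrumented apps

[[outputs.influxdb_v2]]
  urls = ["https://us-east-1-1.aws.cloud2.influxdata.com"]  # example region URL
  token = "$INFLUX_TOKEN"
  organization = "my-org"
  bucket = "otel"
```

This is the sense in which "parts of the collector could be replaced with Telegraf": the ingestion leg can go through either agent, while the dedicated OTel collector remains the choice that tracks the OpenTelemetry project's conventions most closely.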
So if you had like 30-plus fields attached to this value and this timestamp, the problem would be that the queries would be really slow, because we were querying them on the back side, basically pre-querying them to take into account that you had all of these. Tags, you could have a lot more of, because we weren't querying on them in general, but that wasn't helpful if you actually needed to query off the data. Like, it's not helpful to just throw all your data in a tag and be like, okay, well, it's all tagged, but now I can't search via those tags. I needed this data in a field so I could actually search it back, so I could say: I only want the location of my server in the Netherlands, I don't want all the server information from Sweden, that kind of deal. So now, with the new limits, it's more like 200 columns, so you can have around 200 fields, basically. And I'm still learning a little bit more about how we do our schemas nowadays, but basically it's less separated between fields and tags, and more just, in general, timestamp, value, everything else. And that everything-else category can have over 200 values before things start to get a little hazy, I suppose you could say. And even then, I think you could push it, you could go above 200; it might just have some effect on queryability, that speed.
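To make the tag-versus-field distinction concrete, here's a minimal Python sketch of how a single InfluxDB line-protocol point separates indexed tags from the measured field values; the measurement, tag, and field names are all made up for illustration, and real line protocol has extra escaping rules this sketch skips:

```python
# Hypothetical sketch: building an InfluxDB line-protocol point by hand,
# just to illustrate the tag-vs-field distinction. All names are examples.

def to_line_protocol(measurement, tags, fields, timestamp_ns):
    """Serialize one point as: measurement,tag=... field=... timestamp"""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"

line = to_line_protocol(
    "server_metrics",
    tags={"location": "netherlands"},      # indexed metadata you filter on
    fields={"cpu": 0.42, "mem_gb": 7.5},   # the measured values
    timestamp_ns=1700000000000000000,
)
print(line)
# server_metrics,location=netherlands cpu=0.42,mem_gb=7.5 1700000000000000000
```

The point of the sketch: the "everything else" Zoe mentions lives in that fields dictionary, and that's where the column-count limit applies.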
So with that, I can't really tell you, unfortunately. I do believe that it will eventually reach open source, honestly. But when it comes to support for OpenTelemetry, the OpenTelemetry collector itself will be open source, because that's what OpenTelemetry is in general: it's all an open source project. Now, from what I understand, because the collector is just something that pushes data into InfluxDB, it shouldn't have an issue where you could still use it with the OSS. Don't quote me on that, I'd have to actually go and test it and determine that that's the case, but in my mind it makes sense that it shouldn't be an issue, because the only issue you might currently run into between version 3 and the open source 2.x is the fact that we have that SQL, and that's more of a problem when you're getting the data back out, less of a problem for getting the data in, and this OTel collector is going to be, for the most part, all about streaming the data in. Obviously we do also have the OTel, what's the word, receiver, and the one that goes the opposite direction, and that'll have to be kind of figured out, basically, but unfortunately I don't have a great answer for this right now. We should have an answer for this question, and in general for questions around open source, in the next month or two, I would say. Yeah, I would definitely say keep an eye on our blogs and everything as we continue to roll out. We have a lot planned for the rest of this year as far as the rollout of all these new features, so I know this sounds vague, but stay tuned. And what about the InfluxDB Enterprise on-prem version, will it have OpenTelemetry support? I have absolutely no idea. I want to say hopefully yes, it will, but I know less about the engineering plan for on-premise, unfortunately. I'm not quite sure what will be added from InfluxDB IOx.
I know there have been definite talks that, yes, IOx will basically be integrated on top of Enterprise to an extent, so it will reap most of the benefits, I guess you could say. And so I think that's the hope, that on-prem Enterprise will still receive a lot of these great benefits that IOx has to offer, including things like OpenTelemetry, obviously. What is the protocol used by OpenTelemetry, is it gRPC? Yes, it is. I have to admit, I have a second laptop here and I looked this up because I saw this question, so it says here the specification defines OTLP as implemented over gRPC and HTTP/1.1 transports. You can go and check out their docs, which are quite large, for the protocol details. That would be my suggestion, but yes, it does appear to be built over gRPC. What is the difference between AWS Timestream and InfluxDB? So the big difference between us, and I do have to admit I don't look at AWS Timestream super often, is that we have a lot more, what's the word here, features, there we go, that's the right word. We tend to have a few more features than Timestream, and because we are focused on being friendly, I suppose you could say, to the open source community and environment, we tend to hook up to things a little bit better. Like, we have a dedicated Grafana integration, and we have dedicated integrations with lots of different open source vendors, and non-open-source ones too; Grafana has a closed-source offering as well. And the other thing is that we're obviously actively working on this product; we wouldn't have IOx here today if we weren't actively working on things. From what I understand, Timestream is, for good or for worse, a lot more of a consistent product. It's not necessarily being actively improved or doing lots of fun new things, but it stays consistent. That's the best I can do, because I haven't looked at it recently.
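As a small addendum to the gRPC answer: the OTLP spec defines two transports, OTLP/gRPC (default port 4317) and OTLP/HTTP (default port 4318), where each signal gets its own well-known URL path. Here's a tiny Python sketch of that layout; the collector host name is a placeholder:

```python
# Sketch of the two standard OTLP transport endpoints, per the
# OpenTelemetry protocol defaults. The host name is a placeholder.
OTLP_GRPC_ENDPOINT = "http://collector.example:4317"   # OTLP/gRPC default port
OTLP_HTTP_ENDPOINT = "http://collector.example:4318"   # OTLP/HTTP default port

def otlp_http_url(base, signal):
    """Build the OTLP/HTTP URL for a signal: 'traces', 'metrics', or 'logs'."""
    assert signal in ("traces", "metrics", "logs")
    return f"{base}/v1/{signal}"

print(otlp_http_url(OTLP_HTTP_ENDPOINT, "traces"))
# http://collector.example:4318/v1/traces
```

So an exporter pointed at a collector just needs one of those two endpoints, depending on which transport it speaks.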
And I will say this: Amazon has so many products under their umbrella that it's always mind-boggling to me personally when I look at AWS and realize all the new products they've come out with. So obviously I'm biased towards InfluxDB, and so is Zoe, but we like to say that InfluxDB is purpose-built for time series data. That is all we do: all we do is work on our time series database and all the different components of the platform. It takes a lot to build a database, and we have an entire engineering team working on it. And I will say this, there are also other tools out there that you can use for time series but that weren't necessarily engineered for it, so they can't handle the really high ingestion that is natural with time series data, especially when you're starting off. And also, we don't have any external dependencies. There are other time series tools out there that are built on top of Postgres and other platforms, and that can slow them down a little bit. And on that, someone just asked: how does InfluxDB compare with Prometheus? And Prometheus is great, I know a lot of people who love Prometheus as an observability or DevOps monitoring tool. However, it can't scale out; from what I've seen, it's really great for smaller-scale projects. Zoe, is there anything else you'd like to add to that question? So the big thing with us versus Prometheus is we also offer that cloud offering. And from what I understand, Prometheus has an enterprise offering which can scale out quite well, but I think that's more of an on-prem solution for them, and then they have their open source, but they don't really yet have that middle in-between where there's the cloud option. Is it possible to use Flux with InfluxDB IOx, or do we need to switch to SQL? So this is my fault, and I'm sorry, guys, for not saying this a little bit better. Here, let me go back here. So yes, InfluxDB IOx also supports InfluxQL and Flux.
We are taking backwards compatibility very seriously. We're actually currently working on InfluxQL for that backwards compatibility; apparently Flux wasn't such a big deal to do, but InfluxQL is proving a little trickier, and our engineers are currently working on it. So yes, you can still use Flux if that's your preference. The only thing is, if you're already using Flux, don't worry, don't be afraid, don't run away, but newer users will possibly rely a little bit more on the SQL. But yes, if you're using Flux, don't worry about it, you can still use it. So there's another question around this, Zoe. Does OpenTelemetry replace Telegraf for network observability or monitoring? Let me really quick, one sec, I'm looking something up. So the answer is possibly. If you're currently using Telegraf to get your network observability and monitoring data, and obviously Telegraf has a lot of different options, like we have plugins that connect to AWS and such where you can get all of your cloud data, as well as a few other integrations that we offer, then yes, it might be a replacement. But it might not be, because do remember that the OpenTelemetry collector and receiver and all that live within the OpenTelemetry project. And although I think the project's super great, just remember it does mean that you're tying your horse to that OpenTelemetry project and you're now within that ecosystem, and that might not be okay for your company. Or honestly, if you've already done the work and you like what Telegraf is doing for you, like Telegraf is working perfectly fine, do not feel the need to go make yourself extra work if Telegraf's doing what you need.
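To put the Flux-versus-SQL answer side by side, here is a hypothetical pair of queries asking the same question of the same data; the bucket/table name "spans" and the one-hour window are made up for illustration, not taken from the demo project:

```python
# Hypothetical side-by-side of the same ask in Flux and in SQL against
# InfluxDB 3.0. The bucket/table name "spans" is an example only.

flux_query = """
from(bucket: "otel")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "spans")
"""

sql_query = """
SELECT *
FROM spans
WHERE time >= now() - interval '1 hour'
"""
```

Both fetch the last hour of span data; which one you send just depends on which query API you point your client at.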
Like, this is meant for people who always wanted this connector, or who are looking for this connector as they're building out their observability platform; as they're building up their observability solution, they might be able to use this collector, but if you're perfectly content with what you have, don't worry about it. Can you talk a little bit more about the client library support for InfluxDB 3.0? We've already seen cases in the past where the client libraries for InfluxDB are moving to community-only support. So I can tell you right now, because I have this answer in my back pocket, that we will be building Flight SQL support for Go, Python, C#, Java and JavaScript. Those are the five we are actively working on; the Python one and the Go one, I think, are actually both technically done, they're just being extra tested, basically. But those will always be supported by us for sure going forward into the future; we're working on them right now, and we would like to get to all of our current 12 client libraries. Those five are the first ones to get done, especially the Python and Go ones, because, to be honest, they're the most popular, like we have the data to support that those are the ones people use the most. But after that, we are going to be looking at building out the others; I guess we currently offer 12, so the other seven that we currently offer. And in all fairness, some of those client libraries aren't as straightforward as others, like some of them are not language-specific, I'm trying to remember what they all are, but yes, we are looking to support all of them. Will the InfluxDB ecosystem, with Telegraf and Kapacitor, be updated as a whole? Telegraf, to be honest, lives in its own world.
Telegraf is always being updated because it's open source, and the output plugins are definitely being updated, they're being updated for SQL, and we're just making new ones as well, because it's one of those things where you can just keep creating output and input plugins. Kapacitor is in a state of flux, I suppose you could say, and I have to admit I don't really know the answer for that one. If people have been following along with the company for a while, you know that we've been putting a lot of our engineering time and effort into making InfluxDB Cloud even more robust, especially with the new storage engine, which we've called IOx, and everything else, so I don't think Kapacitor has been worked on. It's still there, but it just hasn't been a focus of the team. Would you agree with that? Oh, sorry, sorry Zoe. That being said, you know, there are so many other ways that you can do real-time alerting, I wouldn't be too worried, there are plenty of other options out there. How can you monitor SNMP devices with OpenTelemetry? Let me look this up really quick. I actually think we might have an SNMP Telegraf plugin, weirdly enough. I think we do, I know it's come up in the past. Yeah, so currently OpenTelemetry doesn't have anything in particular about SNMP. There are some people writing blogs about how to do it, though, and it looks like OpenTelemetry has a receiver that you can use for this, so that would probably be your best bet. I'm really quick looking something up here. I did throw in the SNMP agent protocol monitoring integration. That's what I was about to say. Yeah, so that's a Telegraf plugin, and that would be a great way to do this. It doesn't really look like the OpenTelemetry project is focused on SNMP in particular. It looks like you can do it for sure, because it's technically agnostic, so it can fit anywhere.
But if you want something that's a little more specific, I would check out the Telegraf SNMP plugin that is available. It's a bit of a mouthful having the N and M next to each other; I always miss one of the letters. Yeah, when I searched it I missed the N, but it still came up with what I wanted, though I was like, oh, I think there's an N missing here. It's like neither of them are silent. Okay: how would I use OpenTelemetry, or sorry, how does OpenTelemetry use Telegraf on InfluxDB for network observability and monitoring functionality? So just to clarify here, OpenTelemetry doesn't use Telegraf. They're not the same. Like I said, you could see them as competing, in the sense that they do similar things, but neither of them makes any money, so they don't really compete over anything; they're just not working together. The OpenTelemetry collector might remind you a lot of Telegraf collectors, to be honest, that's kind of what it reminds me of at least, but Telegraf is normally more specific about the, what's the word here, the product, the agent, the thing that it works with. Like the fact that we have an SNMP plugin in particular: that's all it does, it just collects data from that device. Or when you're monitoring your cloud stuff, it's specific to AWS, it's specific to GCP; it's very specific in what it works with. Now, there are 300-plus plugins, so the options are quite large, but the OTel collector is meant to work on anything. It's meant to work whether you're a front-end chocolate shop or an AWS server infrastructure; it's meant to work at a large scale or a small scale, on a front end versus a back end. They actually have a lot of architecture diagrams based on whether you're building out a data science platform or a front-end shopping platform. So they're a lot more agnostic in that way, versus Telegraf, which is a lot more specific about what each plugin's gonna do.
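For anyone who wants to try the Telegraf SNMP plugin mentioned above, a minimal config might look something like this; the agent address, community string, token variable, org, and bucket names are all placeholders, not values from the demo:

```toml
# Minimal sketch of a Telegraf SNMP setup. Addresses, credentials and
# bucket names below are placeholders, not real values.
[[inputs.snmp]]
  agents = ["udp://192.0.2.10:161"]   # your SNMP device
  version = 2
  community = "public"

  [[inputs.snmp.field]]
    name = "uptime"
    oid = "RFC1213-MIB::sysUpTime.0"

[[outputs.influxdb_v2]]
  urls = ["https://us-east-1-1.aws.cloud2.influxdata.com"]
  token = "$INFLUX_TOKEN"
  organization = "my-org"
  bucket = "snmp"
```

With a config along these lines, Telegraf polls the device and writes the results straight into an InfluxDB bucket, which is the "more specific" path Zoe contrasts with the general-purpose OTel collector.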
But one thing they do share in common is they both define the type of data you're gonna get back. Telegraf normally tells you all the data points you're gonna receive, and normally you can edit that and say, I don't want these things, or I want these things, or whatever. OpenTelemetry does a similar thing, where it says: this is what you're going to get back for your traces, logs and metrics, this is the standard that we have set for you to receive this data. So in that way they are similar. How does OTel handle NetFlow data? Let's see. Sorry, I'm looking it up on my other computer. Let's see, what do we get? They don't have anything. They don't have anything in particular for it. Again, I think because OpenTelemetry is so broad in its use, they don't have anything in particular for NetFlow data. Also, for those of you who don't know, NetFlow stands for network flow data. I mean, it's kind of similar, I suppose, to logs and traces, but I have to admit, I don't think the OpenTelemetry project is necessarily for NetFlow data. I don't want to say that for sure, though, because in all fairness, I don't work for OpenTelemetry. I'm very familiar with the project, don't get me wrong, but I'm not a part of their board meetings or anything, so I don't necessarily know what they're focused on use-case-wise, but I don't think that's really what they would be for. You could explore a little bit further, though, maybe ask a question in their community, and they might be able to get back to you. Awesome. Wow, well, that was a lot of questions for Zoe. So if anyone has any other questions, we'll just keep everything open here for another minute. Zoe, thank you for that awesome presentation and handling all those questions.