Hello everyone, today we'll be discussing how to deploy a full CNCF-based observability stack in under five minutes with tobs. So let's get started. What is an observability stack? Observability in general is the ability to infer the internal state of a system using the system's external outputs. These outputs can be metrics, logs, and traces — the three main data types of observability.

So let me introduce myself. My name is Vineeth Pothulapati. I'm a product manager at Timescale, working on the observability applications team, primarily on Promscale and tobs. I'm also a maintainer of the OpenTelemetry Operator, so if you're already using the OpenTelemetry Operator, I'm always happy to hear your feedback. And if you're running the OpenTelemetry Collector in a Kubernetes cluster, you should definitely try out the OpenTelemetry Operator — it just eases the deployment and management of the OpenTelemetry Collector. I also enjoy cycling and tasting whiskeys, but not at the same time. So if you are based out of Hyderabad, India, I'd definitely love to join you for cycling and then whiskey tasting. You can reach out to me on Twitter or Slack.

So let's see the data types in observability. First come metrics. A metric is all about trying to understand the state of something using a metric name and a value. Here you can see the metric go_gc_duration_seconds — this is the metric name — and the value is the float 0.0034. It can be anything: the runtime count of goroutines running in your application, how much time your garbage collector is taking to process garbage collection (as with go_gc_duration_seconds), the number of threads, or the memory being utilized. So a metric is all about capturing the runtime state of something in your application.

Now, what are traces? I love traces, so I may be a bit biased towards them. The image you see here is the Jaeger UI.
It's the visualization of a trace. If I need to define a trace: a trace is basically a request's life cycle — how a request flows through your set of microservices, and through the function calls within a particular service. Think of an e-commerce site like Amazon: when you order something, the request goes to the cart service, then the payment gateway, and then you get an acknowledgement saying the order was placed successfully. That involves the request traveling through multiple microservices, and within each microservice it travels through multiple functions. With a trace you can understand what the request life cycle looks like and where most of the time is being spent — the added latency and throughput in each and every part — just by looking at the trace.

Here you can see that the duration of this trace was 18.54 milliseconds, it spans three services in total, and the depth is eight, which means there are eight spans in total. These bars are basically a parent span and its child spans; you can see there is a span consuming 12.94 milliseconds. If you hover over or click on that span, you'll see it also captures some metadata, so you can understand the context of this particular request life cycle and analyze the trace further. So that's traces at a high level.

Now, what are logs? I think logs need no special introduction, because that's the first place any kind of instrumentation or observability starts from. These are logs from the OpenTelemetry Collector. Basically, logs help you understand the current action being performed in your application: you can log errors, debug logs, and info logs to understand exactly what's happening.
This is the first step of your observability journey. Today we won't be discussing logs much; our primary focus will be on metrics and traces.

So let's get back to the title: the full observability stack. The title says deploy a full CNCF-based observability stack in under five minutes. Am I serious? Yes, I'm serious about it. Here's what we have: it supports complete metrics, it supports complete traces, and logs just need an external storage system — you can complement tobs by adding your preferred logging solution to store the logs.

Introducing tobs. tobs stands for The Observability Stack for Kubernetes. The definition: tobs is a CLI tool and a Helm chart that aims to make it as easy as possible to install a full observability stack in your Kubernetes cluster. You can use either the CLI tool or the Helm chart to install this observability stack — it's totally your preference. If you want to check out more about tobs, see github.com/timescale/tobs.

Let's discuss what tobs includes, going layer by layer. First, the exposition layer. In observability we definitely need components that expose metrics from the targeted resources. Here we have node-exporter to expose node metrics from all the nodes (the kubelets) running in your Kubernetes cluster, and kube-state-metrics to expose Kubernetes object metrics from the kube-apiserver. Together these give you an overall understanding of the state of your Kubernetes cluster. By default, tobs includes both node-exporter and kube-state-metrics for you out of the box.

Next, the visualization layer: tobs includes Grafana.
You can use Grafana to visualize anything — metrics, logs, traces — and you can use multiple data sources to query with your preferred language, like SQL or PromQL, or the filtering mechanisms that Jaeger offers. We also deploy Jaeger — Jaeger is the CNCF tracing solution — and from Jaeger we use just the Jaeger Query component to visualize the traces. So if you're already using Jaeger, this is covered for you: you can use Jaeger for trace visualization as well.

Now the collection layer. We've seen the exposition layer — how the targeted metrics get exposed from the nodes and the kube-apiserver — and the visualization layer with Grafana and Jaeger. The collection layer is about how the data gets collected. First, we have Prometheus. Prometheus is a graduated CNCF project, a monitoring and alerting system for your services. I won't go deep into this project, but I hope you're already aware of it. If you're new to Prometheus and OpenTelemetry, you should definitely check them out, because observability today without these tools is close to impossible, I should say.

Prometheus basically scrapes the metrics from targets like node-exporter, kube-state-metrics, or your custom business applications that you've instrumented with Prometheus client libraries; you can also push metrics to Prometheus. Prometheus additionally supports remote-write backends — from Prometheus we do a remote write to Promscale, which we'll see in the coming slides. So Prometheus scrapes metrics from the targets, and it also has an in-house storage engine.
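The scraping and remote-write setup just described can be sketched as a small prometheus.yml fragment. This is a minimal illustration, not the config tobs actually generates — the target addresses and the Promscale connector endpoints are assumptions, so check your own deployment:

```yaml
# prometheus.yml (sketch)
scrape_configs:
  - job_name: node-exporter            # node-level metrics
    static_configs:
      - targets: ["node-exporter:9100"]
  - job_name: kube-state-metrics       # Kubernetes object metrics
    static_configs:
      - targets: ["kube-state-metrics:8080"]

# Forward everything to Promscale for long-term storage,
# and read it back when PromQL queries need older data.
remote_write:
  - url: "http://promscale-connector:9201/write"
remote_read:
  - url: "http://promscale-connector:9201/read"
    read_recent: true
```

In practice, tobs wires this up for you; the fragment is just to show where the remote-write hand-off to Promscale sits in the Prometheus config.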
Prometheus can store the data within Prometheus itself, but if you'd like to store it for longer durations, or aggregate the metrics coming in from different Prometheus instances, you can use a remote-write system like Promscale.

Coming to OpenTelemetry: OpenTelemetry is the second most active project in the CNCF, after Kubernetes. OpenTelemetry includes many pieces — the instrumentation layer, the Collector, and even the OpenTelemetry Operator. But when I say OpenTelemetry here, I mean the OpenTelemetry Collector. If you've instrumented your application with tracing client libraries, you can just forward the traces to the Collector: you can configure receivers like Jaeger, Zipkin, or OTLP, so all kinds of tracing instrumentation can be connected to the OpenTelemetry Collector. In the Collector you can also configure exporters — here we configure an OTLP exporter to forward the traces from the OpenTelemetry Collector to Promscale. The Collector doesn't support storing traces for future visualization and analysis, so it definitely needs a backend to store the traces.

And here comes Promscale, which is, I should say, the powerful component of this observability stack, because it helps you process the data and also provides long-term storage. In observability, data keeps coming in and getting ingested every 5 to 10 seconds, and you need to process that data and visualize it. When I say process, I mean, for example, downsampling it or correlating it. All of this data sits in Promscale.
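The receiver and exporter wiring described above can be sketched as a small OpenTelemetry Collector config. Again, this is an illustration rather than what tobs generates — the Promscale connector's OTLP endpoint (host name and port) is an assumption:

```yaml
receivers:
  jaeger:
    protocols:
      grpc:
  zipkin:
  otlp:
    protocols:
      grpc:

exporters:
  # Ship all received traces to Promscale over OTLP gRPC.
  otlp:
    endpoint: "promscale-connector:9202"
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [jaeger, zipkin, otlp]
      exporters: [otlp]
```

The point is that instrumentation speaking Jaeger, Zipkin, or OTLP all funnels into one pipeline, and only the exporter needs to know about Promscale.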
So Promscale is the storage layer for all the observability data we're discussing in this presentation.

Now for a detailed overview of the stack. We've discussed the exposition layer, visualization layer, collection layer, and storage layer; here we're just listing everything on one slide for easier understanding. The complete tobs Helm chart is a combination of multiple Helm charts. Here you can see we're using kube-prometheus, the Kubernetes monitoring stack offered by the Prometheus community. It includes Prometheus to collect the metrics and Alertmanager to fire the alerts. In kube-prometheus, Alertmanager comes with default alerting rules for your Kubernetes cluster and node-exporter, which means if there are any incidents or anomalies causing issues in your cluster, you'll be alerted automatically through Alertmanager using the out-of-the-box alerting rules offered by kube-prometheus. There's Grafana to visualize what's going on — and you can also alert through Grafana. We have node-exporter to export the metrics from your nodes, and kube-state-metrics to get metrics about your Kubernetes API objects. And the Prometheus Operator manages the life cycle of Prometheus and Alertmanager: it uses custom resource definitions to deploy and manage Prometheus and Alertmanager.
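To make those custom resources concrete, this is roughly what a Prometheus Operator alerting rule looks like — a hypothetical minimal example, not one of the rules kube-prometheus actually ships, and the `release` label is an assumption about how the operator is told to pick it up:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-node-alerts
  labels:
    release: tobs            # assumed selector label, deployment-specific
spec:
  groups:
    - name: node.rules
      rules:
        - alert: NodeExporterDown
          expr: up{job="node-exporter"} == 0
          for: 5m
          labels:
            severity: critical
          annotations:
            summary: "node-exporter target has been down for 5 minutes"
```

kube-prometheus ships a whole set of rules in exactly this shape, which is why the alerting works with zero configuration on your side.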
If you're using Prometheus in a Kubernetes cluster, you should definitely check out kube-prometheus, because it comes with the Prometheus Operator, which eases the management of Prometheus for you in the Kubernetes world.

Then there's Promscale, to store metrics and traces for the long term, and it allows you to analyze the stored data using both PromQL and SQL. PromLens is a tool to build and analyze PromQL queries with ease. Basically, many users may not be familiar with PromQL, or will have a hard time building complex queries with it, and PromLens helps you build those queries much more easily — so by default tobs includes PromLens to make your life easier while working with PromQL queries.

And we have the OpenTelemetry Operator to manage the life cycle of the OpenTelemetry Collector. Just like the Prometheus Operator, the OpenTelemetry Operator manages the OpenTelemetry Collector using custom resources, which makes installation, management, and upgrades easier. In the OpenTelemetry Operator we also recently added support for auto-instrumentation: you just create an Instrumentation custom resource and add an annotation to your deployment saying inject-java: "true", and the OpenTelemetry Operator automatically injects the instrumentation for your Java, Node.js, and Python applications. So without any code changes you can get auto-instrumentation for your applications using this feature — the traces are just exposed and forwarded to the OpenTelemetry Collector. You should definitely check it out; it's really, really interesting: observability for your business applications with zero code changes. And we have Jaeger Query to visualize the traces, so you can use either Grafana or Jaeger Query.
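To make the auto-instrumentation feature just mentioned concrete, here's a sketch of the two pieces involved: an Instrumentation resource and the pod annotation on your workload. The resource names and the collector endpoint are illustrative assumptions, and the Deployment is trimmed down to the relevant fields:

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: my-instrumentation
spec:
  exporter:
    endpoint: http://otel-collector:4317   # assumed collector address
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-java-app
spec:
  template:
    metadata:
      annotations:
        # Tells the operator to inject the Java auto-instrumentation.
        instrumentation.opentelemetry.io/inject-java: "true"
```

With just the annotation in place, the operator takes care of wiring the application's traces to the collector — no application code changes.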
Whether you visualize traces in Grafana or in Jaeger Query is purely a matter of preference and choice.

And what is Promscale? Let's see the complete overview. Promscale is an observability backend powered by SQL. It gives you unparalleled insights — and when I say unparalleled insights, I mean it uses one database to store all the observability data, here metrics and traces. You can also store your business data in the same system, which means you have all the data sitting in one database and you can correlate all these different data types at a specific window in time. It gives you the ability to do any kind of analytics and processing in a specific time window — SQL offers you just about anything, so the sky is the limit if you're using SQL as the query language.

It also has a proven foundation: it's built on the petabyte-scale foundation of TimescaleDB and PostgreSQL, which means it supports advanced database features like high availability, replication, compression, and many more. So you're fully covered on the reliability of the database layer.

And it's easy to get started with and use. Trust me, this is the major differentiator for Promscale, because you need not worry about how to run and manage this observability stack or the long-term storage system. With other solutions, you'd be running tens of microservices — installing, upgrading, and scaling them causes so many problems that you'd need a dedicated SRE team. With Promscale, all you have to run is the Promscale Connector — a stateless component — and the database itself. It's also easy to integrate with Grafana, Prometheus, OpenTelemetry, and all the tools you know and love, because today Promscale supports all the major open-source observability solutions.

And here comes the Promscale architecture.
On the left, you can see Prometheus, which can do remote write and remote read against the Promscale Connector. We also have OpenTelemetry, where the OpenTelemetry Collector uses the OTLP gRPC endpoint to ingest traces into the Promscale Connector. If you're not using the OpenTelemetry Collector, you can also instrument your application directly with OpenTelemetry client libraries and forward the traces straight from your application to the Promscale Connector — that's totally possible.

Coming to Promscale itself: Promscale is a combination of two components. One is the Promscale Connector, which is stateless, and the other is the TimescaleDB database. All you need is these two components running, and you're fully covered for storing all your observability data. You don't need multiple systems or two different stacks to manage traces, metrics, and everything else — with Promscale, all the data sits in one system.

Coming to the visualization layer, we have the Jaeger UI to visualize traces from the Promscale Connector, and Grafana to query metrics through the Promscale Connector. As a side note, the Promscale Connector has 100% compatibility support for PromQL queries, and you can also use Grafana to query with SQL directly from TimescaleDB. You can use SQL on all the data stored in TimescaleDB, and any tool that speaks SQL should just work out of the box with it. So that's the visualization layer for Promscale. If you want to check out more on Promscale, feel free to open this link: tsdb.co/promscale.

Now let's discuss the features offered by Promscale at a high level. These are just the top-level features — we have many more being cooked up and developed in Promscale today, and this list just keeps growing in the coming days.
First, we have full SQL and analytics support on your observability data, which means all the data you send to Promscale can be queried with full SQL support, and you can also run analytics on it using the analytical functions offered by TimescaleDB.

Then, storage support for both metrics and traces. As I said, with other solutions on the market — or other open-source solutions — you need two different systems to store and process metrics and traces, whereas with Promscale all you need is one system, Promscale itself, for storing both. That makes it easy to run and manage for you.

We also offer high availability for Promscale: with Prometheus, you can just use Prometheus labels to leverage high availability in Promscale. Even multi-tenancy is offered: if you have multiple Prometheus instances sending metrics to Promscale, you can tag them with tenant IDs and the data is separated out between tenants.

We also support exemplars in Promscale. If you've instrumented exemplars with Prometheus client libraries in your application, Prometheus scrapes the exemplars and does a remote write to Promscale, and we store them for future analysis.

And there are continuous aggregates for the metrics in Promscale. If you're already using TimescaleDB, you'll already know continuous aggregates — they're TimescaleDB's take on downsampling, though much more than downsampling, and more accurate. We support continuous aggregates on the metrics stored in Promscale.

Here's another interesting feature: per-metric retention. Many users love it.
Basically, say you have hundreds of metrics being stored in Promscale, but you're only interested in keeping a few metrics for one year, and you want the other metrics kept for 90, 120, or 180 days based on your preference. You can apply retention on a per-metric basis: the metrics you're interested in storing long-term get stored for a long period of time, and the other metrics get dropped based on your retention policies. That's totally possible with Promscale.

You can also ingest your own time-series data alongside the Prometheus data. If you have time-series data from legacy monitoring solutions or from other sources, all you have to do is convert it into the JSON schema offered by Promscale — it's in the Promscale docs — and send it with a POST request to Promscale, and this time-series data of yours will be stored alongside the Prometheus data. That means it gives you the power of querying this time-series data using both PromQL and SQL. So if you have any legacy systems and metrics, you should try out this streaming ingest endpoint of Promscale.

Now, these are the internals of tobs. We've seen what Promscale is, so let's get back to the tobs side of the house. We have the tobs CLI, which basically installs the tobs Helm chart into the Kubernetes cluster, and the tobs Helm chart is a combination of all these Helm charts: the kube-prometheus, Promscale, TimescaleDB, and OpenTelemetry Operator Helm charts. So tobs is basically a super Helm chart that combines all of these charts under the hood.

And this is the tobs architecture. It looks complex, but trust me, you are just one command away from deploying the stack and configuring all of these components.
tobs does all the heavy lifting for you — it's pre-configured and pre-baked; all you have to do is deploy it and start using the stack. Here comes the kube-prometheus stack, the box you see here: it includes kube-state-metrics, node-exporter, Alertmanager, Prometheus, the Prometheus Operator to manage the kube-prometheus stack, and Grafana. And here comes Promscale itself — the Promscale Connector and TimescaleDB — plus PromLens to help you build PromQL queries.

Then comes the tracing stack. We have the OpenTelemetry Operator, which has a dependency on cert-manager, so we deploy cert-manager for the OpenTelemetry Operator. And here is the OpenTelemetry Collector, and Jaeger Query to visualize the traces in Jaeger. If your business applications are instrumented with traces, all you have to do is configure the OpenTelemetry Collector as the endpoint and forward the traces: your services forward the traces to the OTel Collector, and the OTel Collector forwards them to Promscale.

And here comes Prometheus: Prometheus scrapes the /metrics endpoints of all your services, which means it scrapes all the metrics from your applications, and the metrics from Prometheus get forwarded to Promscale. So this is how the stack works, and it's all pre-configured for you.

And now it's demo time. Let's pray to the demo gods for the demo to work successfully. Here is the Kubernetes cluster — I'm just doing kubectl get pods. Yeah, I only have the kube-system pods.
So the cluster is just empty. Let's see what the tobs CLI has to offer. tobs basically has these subcommands: grafana, for Grafana operations like getting the password, changing the password, and port-forwarding; and helm, for Helm operations like show-values on your tobs Helm chart — as the core component of the tobs architecture is Helm, we have some Helm operations to make dealing with it easier from the tobs CLI. We support installing the observability stack with the install command. We have jaeger, to perform Jaeger operations like port-forward, and metrics, for metric operations like applying per-metric retention directly from the CLI and configuring the TimescaleDB chunk interval for metrics. And there's port-forwarding for TimescaleDB, Promscale, PromLens, Grafana, Prometheus, and Jaeger to localhost — all the components deployed by tobs can be port-forwarded to your localhost using the port-forward subcommand. We also have prometheus for Prometheus operations, promlens, promscale, and timescaledb for TimescaleDB operations.

With the timescaledb subcommand you can get or change the password of the database, and you can also do connect, which drops you into a psql prompt right from your shell. You don't need to exec into the database pod: normally you'd have to find the TimescaleDB pod, work out which secret is mounted to it, capture the base64-encoded password string in the secret, decode it, exec into the pod, run psql, and then supply the password — it's a bit cumbersome. Instead, all you have to do is run tobs timescaledb connect and it connects to the database for you. How cool is that? And it's the same with the other commands as well.
It all just makes your life easier while managing the observability stack. Now let's install the stack. Okay, let me do kubectl get pods — I just wanted to check that my cluster is in the state I expect it to be. Yes, that's the way I want it.

Let's do tobs install. I'm running tobs install --tracing, because today the tracing support in Promscale is in beta. In a few weeks we'll be announcing tracing GA, which means tobs should then install all the tracing components by default; at the moment, you need to enable it explicitly by passing the --tracing flag. So I hit enter, and the installation is running — fingers crossed for the demo to work.

It asks for confirmation: cert-manager is required to deploy the OpenTelemetry Operator — do you want to install cert-manager? As I said, the OpenTelemetry Operator has a dependency on cert-manager, and since cert-manager doesn't exist in this cluster, it asks for confirmation. If cert-manager already exists in the cluster, it just skips the cert-manager installation. So I answer yes and proceed with the installation.

In the meantime, we can look at one of my previous installations to see what the stack actually contains while this one deploys. Here you can see I have another stack already running, so I'll show you the components the stack deploys. The time is now 6:14 PM my time — let's see whether the stack gets deployed in less than five minutes, as the title says.
Here we have the TimescaleDB pod for the database, Promscale, PromLens, and the Prometheus node-exporter — as this is a three-node cluster, node-exporter is deployed as a DaemonSet, so you have three node-exporter pods. Then the OpenTelemetry Collector, kube-state-metrics, the Prometheus Operator, Jaeger, a grafana-db job to preconfigure the dashboards and users in Grafana, and the Grafana pod itself. We also have some demo services to generate traces for this demo, Alertmanager, and cert-manager, which is deployed for the OpenTelemetry Operator.

The installation is still going on — it's waiting for the pods to start — so in the meantime I want to show you a few dashboards. Just give me a second. I have another environment with all the dashboards I want to show you, built using SQL. These dashboards are not preconfigured in tobs at the moment, but in future releases we will preconfigure them as well, so out of the box you'll have these dashboards configured for you.

Before we jump into the dashboards, I want to check the state of the cluster. It's still installing, so we'll give the stack a few more minutes to be up and running. In the meantime, let's check out these dashboards. I have "Promscale with SQL" here, and you can see these are basically dashboards built on top of traces.
We have traces coming from the Hipster Shop demo applications. These traces are stored in Promscale, and we're using SQL to query all this data on top of the traces. You can see the p99 latency across all traces averages 173 milliseconds, the throughput is 6.82 requests per second, and interestingly there is no error rate, which is great. We have the p99 response time per service here — recommendation service, currency service, email service — and if you hover over it, you can see the cart service response time ranges up to two seconds, which is not good. And here you can see a heat map of the trace durations, aggregated across all traces. So that's the first dashboard I wanted to show you.

In the meantime, let's check the status of the stack. It's 6:18 now, so you can think of it as four minutes since we deployed the stack. Let's do kubectl get pods. The observability stack is deployed in about four minutes — Grafana is in CrashLoopBackOff, so let's give it ten more seconds to start up; it's dependent on TimescaleDB. But we can see that TimescaleDB is deployed, along with Promscale, PromLens, the Prometheus node-exporter, the OpenTelemetry Collector, kube-state-metrics, and the Prometheus Operator. And now you can see the tobs Grafana pod is up and running.

We'll use the tobs grafana get-password command to get the password from the tobs CLI — you can see the password here — and then tobs grafana port-forward, so Grafana is port-forwarded to localhost:8080. Let's copy this password, which is randomly generated by tobs, open localhost:8080, enter admin and the password we copied — and we are logged in.
So this is Grafana, and we have dashboards preconfigured in tobs through kube-prometheus, which internally uses the Kubernetes mixins. Let's navigate through them — these are the dashboards preconfigured by kube-prometheus. You can get into Node Exporter / Nodes to see the node metrics. As the stack is just five minutes old, the data is still filling in, but you can see the CPU usage and load average for the node.

While we give it some time, we can check the data sources. In tobs, the data sources are configured for you out of the box. You see the Prometheus data source, which is configured to use Promscale — so we have Promscale to run PromQL queries as the Prometheus data source. We have Promscale-SQL, a PostgreSQL data source to query TimescaleDB using SQL. And there's Promscale-Tracing, a Jaeger data source to query traces from Promscale. These are the three data sources preconfigured for you by tobs.

Now we can go to the Prometheus dashboard to understand the data. Here you can see the Prometheus stats — uptime and so on — and the scrape targets: it says it has more than 750 targets at the moment, 810 to be precise; the average scrape interval is one minute; there are no scrape failures yet; and it's appending samples, with the head series at 59,000 at the moment. All these dashboards are available out of the box for you if you're using tobs. So that's the Grafana visualization from the tobs instance we just deployed, and you've also seen how we captured the password using tobs.

Now let's get back to the SQL dashboards I've built for this demo in another environment. Here you can see the service performance.
Even these dashboards are built on top of traces, using SQL as the query language. Coming to the first panel — operations with the highest error rate in the last 24 hours — you see the service name and the operation. In the frontend service, the checkout operation has 2,121 spans with errors, an error rate of 55.4%. Each operation shows its error rate, per API and per operation, over the last 24 hours. So the frontend cart checkout has 55.4% errors — the same error rate for the same operation as we saw here — and you can see the whole list of APIs with their error rates.

Here you can see the slowest operations in the last 24 hours. The frontend service has a p99 latency of 28.9 seconds, which is not good. We also have the p999 latency — it's the same again — and for the product ID operation it's the same too: the p99 is almost 30 seconds, which is not good. Also among the slowest operations in the last 24 hours: the health check for the cart gRPC service is taking approximately 1.64 seconds, which is not good either. So it surfaces all these kinds of anomalies for you.

For example, we can just jump into the SQL query used to build this dashboard — it's as easy as that. It's a nested SQL query: you're doing a select, then applying a bit of casting on the data to visualize it in Grafana, and there's another nested query inside. It's just about ten lines of SQL for you to get these kinds of insights.

Now let's jump into another interesting dashboard I have here to demo: service dependencies. So you want to understand what the dependencies of a service are?
You have client services, the frontend and the checkout service, and they depend on the ad service here. The frontend client is calling the ad service 2,715 times, on the specific GetAds API, and the total execution time is 1.03 seconds. How cool is that? You know the service dependencies for each and every application just by processing the traces. And here you can see another interesting thing: the frontend is calling the product catalog with 20,000 requests, which is definitely not great, so you should dig into it by looking at the number of requests the product catalog service is getting. This gives you deeper insight into your applications: how many invocations are happening per API, and which client, which source, these requests are being invoked from.

This is the heat map of trace durations, which we saw in the other dashboard, and here are the slowest traces. You can see the start time, the trace ID, the service name, and the operation. The slowest trace here is from the cart service, with a duration of 1.98 seconds, and these are its resource tags.

Let's see the SQL query used to get this. It is no more than eight lines, I should say. You do a SELECT of the start time, replace the special characters in the trace ID with an empty string (that's the trace ID here), select the service name, span name, and duration in seconds, and convert the resource tags to JSONB for easier visualization in the table. We query all of this from the span table WHERE the parent span ID is NULL: in the trace data model, a trace is essentially represented by its parent span, so we capture all the parent spans, because they denote the traces, and limit them to 100.
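The eight-line query just described might look roughly like the following. It's a sketch under the assumption that root spans (parent span ID NULL) stand in for whole traces; the table and column names are taken from the narration, not the exact Promscale schema.

```sql
-- Sketch of the slowest-traces panel: root spans (parent_span_id IS NULL)
-- represent whole traces. Names follow the narration and are assumptions.
SELECT
    start_time,
    replace(trace_id::text, '-', '') AS trace_id,  -- strip special characters
    service_name,
    span_name,
    duration_ms / 1000.0 AS duration_seconds,
    resource_tags::jsonb AS resource_tags          -- JSONB for table display
FROM span
WHERE parent_span_id IS NULL
ORDER BY duration_ms DESC
LIMIT 100;
```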
So those are the top 100 slowest traces you see in this table. It's as easy as that. The SQL way of querying your observability data is a new approach, it's really easy, and the sky is the limit: you can correlate data and build this kind of dashboard to fit your own requirements. It offers tremendous value for understanding your services.

Those are the three dashboards I wanted to show you, and note again that all of them are built on top of traces, with SQL as the query language, querying directly from TimescaleDB. Now let's check the dashboard again: the data is coming in, and you can see the stack has been running for the last 10 minutes, so the data is filling in. To make sure that the Promscale instance installed by tobs is ingesting metrics and traces, all you have to do is check its logs. Here you can see that it is ingesting roughly 4,500 samples per second; these are Promscale's info logs stating the current ingestion rate.

As the final demo, let's deploy the sample microservices so we can see traces flowing. I'm deploying a bunch of microservices which emit traces to the OpenTelemetry Collector, and the Collector forwards those traces to Promscale; then you can see in the Promscale logs that it is ingesting that many samples per second. This might take a couple of minutes as the pods need to get up and running. I think they're almost up, so let's tail the Promscale logs. Here you can see it says it is already ingesting spans, five to eight spans per second.
So even traces are now being ingested into Promscale, since we just deployed the demo microservices. Here you can see 2,000 samples, and all the samples and spans being ingested; we see as high as 231 spans being ingested here. So Promscale is now ingesting both metrics and traces for you. That's the demo I had, and thanks to the demo gods, it worked as expected.

To learn more, you can find all the resources, slides included. This link is not correct, my bad, sorry about that: it's a PromCon talk link I added here and should replace. You can find the slides in the description; I'll share them through the CNCF webinar. The observability stack for Kubernetes can be found in the tobs GitHub repo, and Promscale is at github.com/timescale/promscale. You can also find the Promscale blog post at this link.

If you are interested in discussing tobs and Promscale further, join us in the TimescaleDB Slack, in the #promscale channel. We are currently rethinking the tobs architecture to support GitOps and infrastructure as code, since every enterprise has its own way of deploying components into its infrastructure. If you have any thoughts, suggestions, or feedback on tobs, definitely reach out to us on Slack; we would love to have a quick call with you to understand your use cases and requirements, to better shape the future of tobs. tobs will go through some architectural revamping and changes in the near future, which should make it even more powerful and make deploying the observability stack even easier. Right now you see it as one command away, but we will expand this ease of deployment to different architectures and infrastructures, like GitOps and infrastructure as code, in different ways. We're still exploring that side of tobs, so your feedback is definitely valuable to us.
You can reach out to us in the TimescaleDB Slack. Let me quickly show you the tobs GitHub repo; if you are interested, do check it out. Here we have the tobs project and quick-start guides on installing the CLI and getting the stack up and running; it's the same command we ran to bring tobs up. And here is the Promscale repo, where you can learn more about Promscale. We also have docs: if you are getting started with Promscale, definitely check out the Promscale docs on the TimescaleDB website. They give you more detail on the Promscale architecture, some high-level information on how the schema is designed in a relational database for observability data, and installing tobs, with all kinds of examples.

And just a heads-up: we recently launched the Promscale logo, and I'm very excited about it, so I wanted to share it with you. You'll be seeing more of Promscale and the logo in future talks, CNCF webinars, and other platforms. You can also check out the blog posts from the observability team at Timescale on the Timescale website, which holds plenty of interesting posts on TimescaleDB and observability, including some crypto-related posts on how crypto data is stored in TimescaleDB. If you are interested in observability, filter the blog by observability: we recently published "How to turn Timescale Cloud into an observability backend with Promscale", which you should definitely check out. It explains how you can install tobs and Promscale while storing all the data in Timescale Cloud, so the entire storage layer is offloaded from your Kubernetes cluster to Timescale Cloud, and it just works out of the box for you. This is the architecture.
You will have all the tobs components, including the Promscale connector, in your cluster, but the database itself will be offloaded to and run in Timescale Cloud. It offers all the major features, like ease of operations and scaling of compute and disk, and TimescaleDB has some amazing features of its own. We have a 30-day free trial if you want to try Timescale Cloud for storing all your observability data, and do check out that blog post if you are interested in getting started with tobs and Timescale Cloud. There are other blog posts here as well, like how to downsample Prometheus metrics in Promscale, and what traces are and how SQL helps you get deeper insights from them, which discusses the dashboards I demoed today, as well as simplifying Prometheus monitoring for your entire organization using Promscale. There are many posts like that on observability, so do check them out if you are interested.

And we are hiring! If you are interested in joining Timescale, whether on the TimescaleDB database side of the house, Timescale Cloud, or the observability group, feel free to check out our careers page or reach out to us on Slack. We would love to have you as part of the Timescale team. Thank you, and see you in future talks from Promscale and Timescale.