I'd like to thank everyone who's joining us today. Welcome to today's CNCF webinar, Continuous Profiling Go Applications Running in Kubernetes. I'm Christian Jans, cloud consultant at Level 25 and CNCF ambassador. I'll be moderating today's webinar, and we would like to welcome our presenter today, Gianluca Arbezzano, SRE at InfluxData. A few housekeeping items before we get started: during the webinar you are not able to talk as an attendee. There is a Q&A box at the bottom of your screen; please feel free to drop your questions in there and we will get to as many as we can, either during or at the end, so bear in mind we can't do it all at once. This is an official webinar of the CNCF and as such is subject to the CNCF Code of Conduct. Please do not add anything to the chat or questions that would be in violation of that Code of Conduct; basically, please be respectful to all your fellow participants and presenters. Please also note that the recording and slides will be posted later today to the CNCF webinar page at cncf.io/webinars. And with that, I'll hand it over to Gianluca to kick off today's presentation.

Hello everyone, and thank you, Christian, for your kind introduction. And yeah, thank you for being here. I will share my journey with continuous profiling in Go. Everything that we share is open source, and we will see how I set up the continuous profiling infrastructure at InfluxData, and why.

So if you are a Go developer, you have probably seen this output already. If you are not, you can think of it as a profile. Almost every programming language has one, and in Go you can expose it over an HTTP endpoint, so it is easy to scrape or easy to interact with from the outside. Having the ability to take a snapshot of a running application is very important, and even more so when you have containers or pods, because they tend to come and go very quickly. You have to take a picture of the runtime to figure out what's going on, to troubleshoot what broke or what didn't work yesterday or last month, or also to do easy comparisons. A profile usually exposes the number of allocations that your runtime is doing, the number of goroutines, the heap, the mutexes, and you can do tracing in the runtime as well.

pprof is the format that Go uses, and it's very, very powerful. You can even instrument your code to build profiles and extend what a profile can expose. For example, in InfluxDB, the open source time series database that we develop at InfluxData, we also place inside the profile the last ten queries that you ran, and a buffer of the logs that the InfluxDB daemon shipped, because this helps us a lot when doing troubleshooting. So when there are performance degradations or issues, we just ask the community to share their profiles, and in there we have everything that we need to reproduce the problem or to figure out what's going on in the runtime.

So Go has a runtime, there is a binary that runs, and this is a very good way to introspect the runtime itself. And it's easy, because it is a tool that ships with Go itself. You don't have to install anything; it just works when you install the language. As I told you, it can be an HTTP endpoint. In this example, I am using go tool pprof and I pass a URL to it, and this URL is my Go application that is running.
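For reference, that kind of command looks roughly like this (a sketch; host, port, and profile name are placeholders for your own application):

    # Fetch a heap profile from a running Go application and open an
    # interactive pprof session; the profile is also cached locally.
    go tool pprof http://localhost:6060/debug/pprof/heap

    # The same tool can fetch a 30-second CPU profile.
    go tool pprof 'http://localhost:6060/debug/pprof/profile?seconds=30'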
And as you can see, that command fetches the profile, stores it locally, and then you can read it in various formats. This is the textual format, so it's a string, but you can also print a graph like this one. Profiles are very good because you can introspect the chain of functions that your runtime runs. You can see how much CPU they are using, and you have very, very deep visibility, function by function: who is holding a mutex, who owns the goroutines, and so on. So it's very, very powerful, and not only to figure out what's going on now, but also to do post-mortem analysis later.

As I told you, it can be an HTTP endpoint. It doesn't need to be one, but it's very common to have it exposed on a port like 6060. And from the Go documentation, this is how you do it: you import the net/http/pprof package and you start a server. By default it registers on the global ServeMux, so it ends up in the main HTTP server when you start it, and you don't have to do any routing or anything like that. You can do that, but you can also wire it into your own mux.

So now that we are all on the same page, let me say something about myself. I'm Gianluca. I'm a CNCF ambassador as well, and I work as a software engineer, site reliability, let's say, at InfluxData. So I'm an SRE, officially. You can find me on the internet as gianarb, and I have a website where I try to blog with a good cadence and a good schedule. You can also find me on Twitter; it's the best way to follow my rambles and the like. And now that we are all at home, it's good to have some chat, so see you there. When I'm not working, I like to build tools: I like to write scripts, I like to do whatever makes automation easy for everybody. And when I'm not hacking, I grow my vegetables. I'm based in Italy, and it's actually been very cold for about a week now, so I can't wait to get back to my garden. And I like to travel for fun and work, but as I said, now we are all stuck at home, so it's good to talk with you. I'm used to having a look at the chat and the Q&A as well, so if you have any question that crosses your mind, just share it there. If I can, I will try to answer as soon as possible, or later if that's better.

So yeah, profiles and the way we look at applications changed, mainly because now we have clouds and our environment is way more dynamic. VMs come and go, containers come and go even more dynamically. It's very hard to call things by name, because the hostname changes, the container name changes, the pod name changes. You have to, in some way, take a screenshot of your application from time to time in order to have a look at what's going on. That's why we centralize logs and pull metrics with Prometheus and things like that: we don't actually have the time to SSH in and see what's going on. Everything moves too quickly, so you have to take snapshots and look at them over time. Thank you, clouds.

And yeah, usually applications make trouble in production. That's also important to remember. Everything may run smoothly locally, but as soon as it gets to production, that's where the trouble starts, and that's where trouble matters even more. So, how does a developer take a profile from a local application? It's kind of easy; we saw it already.
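For reference, the server-side setup from the Go documentation mentioned a moment ago boils down to a minimal program like this (the port is only a convention):

    package main

    import (
        "log"
        "net/http"
        _ "net/http/pprof" // registers /debug/pprof/* handlers on the default mux
    )

    func main() {
        // The blank import above wires the pprof handlers into
        // http.DefaultServeMux, so this server exposes them automatically.
        log.Println(http.ListenAndServe("localhost:6060", nil))
    }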
So you have go tool pprof, and you can download the archive or fetch the endpoint, and from there you're done. You have the file locally on your laptop and you can use it, you can compare it, and so on. But how developers do that in production is a mystery, and it usually changes company by company. And that's a problem, because usually what happens is that a developer asks somebody who knows where the IPs are, or how to get kubectl working, to connect to the pod and get the profiles. And that's not usually something the person who does it is happy to do. We are not babysitters for software engineers, but I have to say something that is very important: if your developers ask you, as an SRE or as an ops person, to do something that they are supposed to be able to do themselves, it's on us to create sane workflows that developers can use to achieve their goal in a very effective way. So if you get pinged by engineers because they don't know how to get profiles, or how to get logs, or how to get a trace, that's a big alarm bell. It's a sign that you should change what you're doing and improve the workflow, to bring developers closer to where they can find useful information.

So yeah, another very crucial aspect of a profile is that you never know when you're going to need one, and you never know how you will use the information that is there. They really behave a lot like metrics: you may have a ton of them that you never use, but as soon as you remove or delete them, you will be looking for them. So it's a lot of information that we have to figure out how to use as best we can, because it's pricey to store all those profiles, or those metrics, or those logs.

So let's summarize the issue. Developers are the stakeholders for profiles, because they are usually the people that use them: it's their language, it's their application, they know the function names. And they don't know in advance which profiles they're going to need. Production is not as comfortable as it should be, usually, and there are good reasons for that, like security and compliance. Those are all good reasons to be secure, but you should also be friendly with the developers; otherwise you will attract a lot of noise to yourself, and that's not fun. And as I said, with profiles you never know when you're going to need them, just as with metrics and logs. You may pass two weeks without looking at them, but at some point somebody will tell you that your application is running high on CPU, and you will have to look at them, even back in time. So they're very good for post-mortems.

Cloud and Kubernetes increase the amount of noise a lot. We have the complexity and the dynamicity of our environment, which means that taking a snapshot of our applications is crucial but also very hard. And yeah, we have way more binaries because we do microservices, and way more pods. Even if you have one service that is like a monolith, it's still way more replicated and redundant than it used to be a few years ago. So all this movement increased complexity. As I said, we have exactly the same problem with logs and metrics, and that's why we collect them, store them in a time series database, and try to get value out of them when we need to troubleshoot an issue.
And to be fair, profiles are just a different way to aggregate points in time, but the points come from the runtime, and usually they are grouped by function name. Profiles are just a bunch of time series data aggregated in a way that looks like a profile. So how are we going to solve all those issues, and how are we going to make our life easier by making developers happy? Because that is usually how my life goes. Spoiler alert: the solution is part of the title of the talk.

Just as with logs, which we collect every few seconds and push to a centralized place with Fluentd or whatever log shipper, we can do the same for profiles. That's it. So follow me: you have all your applications that are running, they do amazing stuff, they sell a lot, they get a lot of money from your customers, and you have a collector that gets information from the application itself. It will pull /metrics, so it may be a Prometheus instance, or another exporter, or another collector like Telegraf, whatever; with some cadence, some interval, it will take a snapshot of the application in the form of metrics and push it into a central repository. For logs it may be Elasticsearch or CloudTrail or whatever; for time series it may be InfluxDB, or Prometheus, or Honeycomb, or New Relic, whatever you have. For profiles, there is an open source tool called Profefe that does exactly that.

And you interact with the outcome via an API, and that's very crucial and super important, because, as you can see, developers don't need to go and look for the application anymore. There is no arrow that goes from the smiley face to the application, because there is an API and everything is centralized. Nobody cares about what the network looks like anymore, nobody cares about how to SSH anymore, nobody cares about kubeconfigs anymore: developers just go to the API and take the profile via an HTTP endpoint, and they are very happy with that, so they can write whatever automation they like on top of it.

You can find the code that I'm speaking about, the Profefe code, in a GitHub organization: github.com/profefe. And the way I set up the infrastructure looks like this. There are all the applications on the left, running Go code. They are running as pods in Kubernetes, there are a lot of them, and they expose the pprof HTTP endpoint as I showed you before. And there is a cron job, which is another pod that runs every once in a while, in my case every 10 minutes. It gets the list of pods, pings each pod IP, grabs the profiles in the form of a tar archive, and pushes the archive to the Profefe collector, and the Profefe collector stores it in the database. We are on AWS and we use S3, but Profefe has an abstraction layer, so you can store profiles in other object stores, or locally, or whatever you like. We will probably do the Google Cloud integration very soon, because we are moving cloud as well. So right now we use this infrastructure on AWS.

And as I said, just to be clear: InfluxData does a lot of open source, and that's why I'm contributing to Profefe, but this is not a tool that comes from InfluxData. I got asked to implement a continuous profiling integration, I started looking around in the open source land, and I discovered this project, which I have liked since day one. And that's why I'm here. So that's it.
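To give an idea of the collector API, pushing a captured profile into Profefe looks roughly like this sketch (the address uses Profefe's default port; service name and labels are placeholders, so double-check the exact query parameters against the Profefe README):

    # Push a pprof CPU profile to the Profefe collector, tagging it with
    # the service name and free-form labels for later filtering.
    curl -X POST \
      "http://profefe-collector:10100/api/0/profiles?service=auth&type=cpu&labels=region=eu-1" \
      --data-binary @cpu.prof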
And the developer interacts with the Profefe collector to get what they need, because it's an API. There is another way you can set up the infrastructure: as you can see, there is no cron job here, because you can embed a library inside your application, and the application itself will take a profile of itself and push the outcome to the Profefe collector. You have to choose the one you like most. I will tell you why we use the other one, the one with the cron job, not this one. But this one is very flexible, because you are inside your application, so you can take a profile at the best time you think it should be taken. I think at some point it will be good to use both: you take a profile every once in a while because you need one, and maybe at some particular stage in your application, maybe during shutdown, you take a profile because it's good information to have. Usually when you crash, if you get a panic, you can just recover from the panic, take the profile, and push it before dying, so you are sure to have one that matters. But the flow is the same: your application pushes to the Profefe API, and the profile goes into storage. That's it. Your developers always use the Profefe collector, because it has a nice API.

So yeah, the pull-based solution is the one that we are currently using, because for our environment it was the easiest one to start with. All our applications, all our services, already exposed pprof internally, so for us it was just a matter of having a collector smart enough to leverage the Kubernetes API. And that's it.

Kubernetes, I have to say, made this process very easy. When I started to use Profefe, the Kubernetes integration wasn't there, so that's the biggest contribution we made to the project: we built a bridge between Profefe and Kubernetes itself. I see Kubernetes as a bit of a troublemaker: it makes a lot of noise and a lot of trouble, and it manages all those applications. And as you can see, the applications are not well organized, because they come and go as Kubernetes pleases. But Kubernetes has APIs, and those APIs can be used to discover applications and get their IPs. So if you have the IP and you know the port, you can get the profile, and you don't need to do crazy stuff to have it: you can just go and scrape it periodically. So that was an easy summary to draw: just use the Kubernetes API to create a bridge that collects profiles and pushes them to Profefe, so that profiles are continuously gathered from all the applications that we run.

The way we communicate to the collector what has to be scraped is via annotations. If you use Kubernetes, you know annotations for sure; they are a very well-known concept. You can label and annotate almost every resource in Kubernetes. Labels are very commonly used to do filtering; annotations usually change the behavior of what you can do with the resource, and that's why I'm using annotations here. The annotations are very similar to the Prometheus ones. All the pods that have the profefe.com/enable=true annotation are the targets for the collector, for the cron job.
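As an illustration, an annotated pod might look something like this sketch (the annotation keys follow the kube-profefe convention explained next; verify them against the kube-profefe README):

    apiVersion: v1
    kind: Pod
    metadata:
      name: auth-7d9f
      annotations:
        profefe.com/enable: "true"   # opt this pod in for scraping
        profefe.com/port: "8085"     # override the default 6060
        profefe.com/service: "auth"  # stable service name instead of the pod name
    spec:
      containers:
        - name: auth
          image: example/auth:latest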
We don't need the IP address ourselves, because we get the list of pods that have the enable=true annotation, and we get the IP from the response. By default the collector looks for port 6060 and the path /debug/pprof, because that's how it usually works. You can override that configuration: with profefe.com/port=8085 you can specify a different port, in this case 8085, and you can specify a different path with profefe.com/path.

The service is a first-class citizen in Profefe; it's like the application name, and you can group by it. Obviously you have to know the application when you are taking the profile, otherwise you will lose context. By default, the collector will use the pod name as the service, but my suggestion is to use the annotation, because as you know, the pod name changes every time a new pod gets rescheduled by the ReplicaSet, so you would end up with a lot of noise and a lot of service names that won't tell you much. The pod name will be kept as a label, because every profile has a service, an instance, and tags, and all that information can be used to group and filter by when you look up profiles later. So the service has to be there; usually it's the name that identifies the application. And as labels you will have other stuff: all the labels of your pod, plus the pod name and things like that. So this is how you tell the collector how to scrape and what to scrape.

So this is the actual infrastructure we have. There is the Kubernetes API on the left, and in the middle there is our cron job, which is called kprofefe. kprofefe runs every 10 minutes: it asks the Kubernetes API for the list of pods that it has to scrape, and it starts to collect profiles from each one of them directly, just reaching them via the IP of the pod. When it gets the profiles, it pushes them to the Profefe collector. The collector, i.e. the central repository, will organize them, order them, and store them in the database, in our case, as I said, S3. And from there, they are available for developers to query.

So, as I said, developers now don't have to care about what the network looks like, or how to log in, or kubectl, or whatever: they have an API that they can call, and they know they will always have profiles available at the interval that they choose, or that you choose for them, based on the schedule of the cron job. This is super important, because you are giving them the possibility to find what they are looking for without bothering you, and that is a very effective way to simplify life for everybody.

So yeah, I lied when I said they don't have to care about kubectl anymore; that's not entirely true. Other than the cron job, kube-profefe also provides a binary that is a kubectl plugin. So, if you want, you can install the kubectl-profefe plugin, and it is the entry point for Profefe from your kubectl. You can do stuff like kubectl profefe capture, and it uses the same flavor as the traditional native kubectl commands: you have -n to filter by namespace, you can pass one or more pod names, or you can use -l to do label filtering.
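To make that concrete, here is a sketch of both routes, the manual one and the plugin (namespace, pod name, and label are placeholders):

    # The manual way: port-forward to the pod, then point go tool pprof at it.
    kubectl port-forward -n ops pod/influxdb-v2 6060:6060
    go tool pprof http://localhost:6060/debug/pprof/heap

    # With the kubectl-profefe plugin: same kubectl-style flags.
    kubectl profefe capture -n ops influxdb-v2
    kubectl profefe capture -n ops -l app=influxdb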
This is very important to me, because when I develop, I really try to be as friendly as I can. And in Go you can be that friendly, because there are a lot of libraries that we can get from Kubernetes itself. There are a bunch of articles about that on my blog as well. So yeah, this is the way you can capture profiles. When you do that, your profiles get stored locally, but you can also push them. So it's a very good way to get profiles, even if you don't use Profefe. Because what this command does is make an API request to Kubernetes, ask for the pod called influxdb-v2 inside the ops namespace, and then, transparently for you, do what kubectl does with port forwarding. It opens a port forward, so your kubectl locally is able to reach the container, and it downloads a profile. It's as easy as that. You can, as I said, keep it locally or push it to Profefe. So you get a profile that you can share with your colleagues.

And I told you about URLs. So this is an example: this is the same go tool pprof command we saw before, but we are not reaching our application; we are making a query to the Profefe repository. And as you can see, there is an endpoint, the most useful one, the one that I'm showing you here: /api/0/profiles/merge. What merge does is give you back a profile just like the one you get normally from your application, but it is a merge over time of all the profiles that you collected for a particular service, where the service is a query parameter. In the query parameters you also have to specify the type of the profile, so cpu, mutex, goroutine, otherwise it won't be able to do an effective merge, because a merge can only be done across profiles of the same type. And you have to specify a from and a to as well, so it knows the time range. And obviously you can use label selectors to filter the profiles you're looking for even more deeply. For example, if you are labeling your application with the Go runtime version it is compiled with, you can filter by Go version and compare how the same application behaves on two different runtimes.

And other than the merge one, you can also do what a REST API usually allows you to do: you can list all the profiles by service, or list them by service and type, or get only one. Every profile has an ID, and you can get the profile for one specific ID. So it's very flexible, and this is the reason why I fell in love with the project: the API was already done and it was working, so I just had to hook it up in my environment. And I was sure that building all this stuff from zero internally, you usually end up with a hand-crafted pipeline that is not as good as this one, which is made by a community.
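As an example of that query flow, fetching a merged profile out of Profefe might look like this sketch (the collector address, service name, and timestamp format are placeholders; check the Profefe API docs for the exact parameters):

    # Merge all CPU profiles collected for service "auth" in a time window
    # and open the result in pprof, as if it came from one process.
    go tool pprof \
      'http://profefe-collector:10100/api/0/profiles/merge?service=auth&type=cpu&from=2020-05-01T00:00:00&to=2020-05-02T00:00:00'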
So yeah, the number of profiles can grow like crazy, because if you have 150 containers running and you take a profile every 10 minutes, you get a lot of profiles per hour. So be careful about where you store them and for how long. That's why we use S3, because it has TTLs as well; we keep the profiles for a month. And the good part, which we haven't done yet but will do at some point, is that rather than just deleting them after the TTL, you can actually merge them and keep fewer of them at less granularity. So you are kind of downsampling them.

Another consequence of having so many profiles is that you really have a lot of information, and that information is crucial for devs, but also for us, because you have CPU, memory, and so on, and you can correlate them across binaries. All those profiles contain information about the amount of CPU used by each function, so you can potentially ask, across all the profiles, for the top 10 CPU-intensive functions. If your managers come to you saying we spend too much money on Amazon or Google Cloud, and we have to be more conscious about what we deploy in terms of performance, you have a way to say: okay, those are the top 10 most expensive functions across the board, across all my microservices. And you can start from the most offending one and save some money and cut some costs.

So yeah, it's a very effective way to build bridges between dev and ops. As I said, ops deeply care about CPU, memory, those kinds of things, because they have to keep the infrastructure healthy for everybody. So what usually happens: when the CPU spikes, as an ops person you start to look at the servers and you start to figure out what's going on. At some point, if you can't fix it, you call the developer, and maybe the developer comes on board two hours later, because that's how it usually works, and maybe the outage is already over. How do you troubleshoot that? In this case, you just get a profile from two hours earlier, because you have one, and you can troubleshoot with that.

Or you can actually analyze the profile itself, sample it, and push the samples into a time series. And that's what we actually do. We started doing it recently, so we don't have a lot of graphs or analysis yet; what I'm sharing here for now is the infrastructure, and what we do with the data is still something we are figuring out. But this is the idea: as soon as the profiles land in the Profefe collector, that is the logo on the left, we store them in S3, as I said. And S3 can be observed by Lambda functions, so we created a trigger that calls a Lambda function every time there is a new object in the S3 bucket. What the Lambda function does: it downloads the profile, samples it, and pushes the samples to InfluxDB. What I mean by sample: for us, at the moment, the sample is what you get when you do top 10 in a pprof profile. It's the top 10 heaviest functions for the profile you're looking at. So if you have a CPU profile or a memory profile, we sample the top 10 functions that are using the most CPU or memory. From that point in time, we have them as queryable data inside InfluxDB, so we can build graphs, or compare, or do aggregations, or answer the kind of questions I had on the previous slide: give me the top 10 functions across the board, across all my binaries, that are using the most memory. We can do that because we have the ability to sample, and we have the data in a shape that is actually queryable.
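A rough sketch of that sampling step, using the pprof parsing library from Google mentioned in the links below (the top-10 aggregation is a simplification of what our Lambda does, and the InfluxDB write is omitted):

    package main

    import (
        "fmt"
        "os"
        "sort"

        "github.com/google/pprof/profile"
    )

    func main() {
        f, err := os.Open("cpu.prof") // a profile previously collected from a pod
        if err != nil {
            panic(err)
        }
        defer f.Close()

        // Parse the raw pprof data into a structured profile.
        p, err := profile.Parse(f)
        if err != nil {
            panic(err)
        }

        // Aggregate the last sample value (CPU time for a cpu profile)
        // per leaf function.
        totals := map[string]int64{}
        for _, s := range p.Sample {
            if len(s.Location) == 0 || len(s.Location[0].Line) == 0 {
                continue
            }
            fn := s.Location[0].Line[0].Function
            if fn == nil {
                continue
            }
            totals[fn.Name] += s.Value[len(s.Value)-1]
        }

        // Sort function names by their totals and print the top 10.
        names := make([]string, 0, len(totals))
        for n := range totals {
            names = append(names, n)
        }
        sort.Slice(names, func(i, j int) bool { return totals[names[i]] > totals[names[j]] })
        for i, n := range names {
            if i == 10 {
                break
            }
            fmt.Printf("%s %d\n", n, totals[n])
        }
    }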
So I placed some links here. The first one is the GitHub repository for Profefe; as I said, there is the collector, which is the centralized repository. We call it collector, though I think we should change the name to repository, and that's maybe why I confused the terms a little bit. There is also profefe/kube-profefe, which is the bridge between Kubernetes and Profefe; that second repository contains the cron job and the kubectl plugin. The second link is the research paper from Google, because they also have a continuous profiling infrastructure, and you can use it on Google Cloud in Stackdriver. It's a very good paper to understand why and how this is useful, if it's not clear from this presentation, or if you want to go deeper. The third link is a very nice post about how to profile Go; if these concepts, the pprof concepts, are news to you, have a look at that link, it's super useful and well done. The fourth link is a library that comes from Google, and it is actually the one that I'm using to do the sampling from pprof into InfluxDB: it's a library that allows you to read a profile, disassemble it, and do whatever you want with it. And the last one is my blog. So thank you, that's the end of my presentation. I didn't have the opportunity to see the questions, so let me see.

Well, let me say thank you for the presentation, Gianluca. Again, there is time for questions now, so if you haven't already put them into the Q&A box, feel free to ask them now; the Q&A tab is just at the bottom of the screen. Assuming you've already found it, maybe let's start with the first one, from David, who's asking: doesn't kubectl port-forward always work to connect to a pod in order to retrieve a profile?

Yeah, I mean, I decided to implement the kubectl capture that way because it usually runs from your local laptop, and kubectl port forwarding, from my understanding and from my experience, is the way that usually works; it's the most common one to find open and workable. Internally, the cron job doesn't use port forwarding, because it would be too heavy and unnecessary: you can configure your Kubernetes network to allow the pods running inside to reach the others. So this is how it is implemented now. I presume that as more people onboard the project, we will have to implement different ways, based on other common access patterns.

Awesome. So let's continue with Abel Karim. Abdel Karim, sorry. Does the profiling affect the performance in a production environment?

I mean, I don't have a strong answer for that, but in the community you will find almost all applications exposing it internally, and it doesn't affect the runtime in a way that will slow it down noticeably.

Oh, yeah, cool. So Alejandro is asking if this applies to other programming languages as well, or if it's just Go?

That's a very nice question. I saw that Google has a repository called pprof-nodejs, so I think other languages are supported, and I think there is also a C++ pprof-like exposition, but I only have experience with Go.

Awesome. Kidd is asking if there is a way to dynamically disable and enable Profefe profile collection for push?

Yeah, it really depends on which infrastructure you choose to adopt. If you go with the same one we use, pprof is usually always enabled, and if you don't want to collect profiles anymore, you stop the cron job and it will stop collecting them. The cron job, I didn't show it, but it is also very dynamic.
It supports... don't imagine that you have only one cron job that scrapes all the profiles. We have one, because it's enough for us, but it supports label selector and namespace filtering, like the kubectl profefe capture command. So you can actually run as many cron jobs as you want, and they can target a subset of pods. You can do stuff like: this cron job will profile the front-end namespace and run every 10 minutes, because maybe you need that cadence, and you have another cron job that looks at all the other namespaces and runs every day, because you don't need that granularity for all the rest. So it's very dynamic and scalable in this way, and it's also very easy, because you just have to deploy more cron jobs. And if you go with the other implementation, so if you instrument your application with the Profefe agent library, you are inside the application, you have code, so you can enable and disable it as you wish.

Cool. Marianne, please put your question into the Q&A box so that we can take it up. Also, please do think about other questions, as we have some minutes left. But until then, someone anonymous asked: what is the performance overhead of profiling in production?

Yeah, as I said, I'm not aware of numbers around that. From my experience, we keep it running all the time in all our applications, and we don't see degradation in any way, or anything that we can point to the profiling. And from my experience it is common practice; it's not something that we alone do.

Awesome, so there is Barri's question in the Q&A box as well, thank you for that. Is there any tool support enabling to correlate profiles with results from distributed tracing?

Well, the problem is that they have very different grouping techniques. Distributed tracing, if I understand it correctly, like OpenTracing, OpenTelemetry, and things like that, Jaeger, they group by request. With profiles, you are measuring the runtime. You can do tracing inside the runtime, and you can even end up exposing that via pprof as well, but I don't have experience there, so I don't know the answer.

Cool, thank you so much. Oh, just in time, another one. What are the benefits over other CNCF-graduated tracing projects?

Yeah, I mean, this is not a tracing project at all. As I said, profiles give you information about how your runtime is performing overall. If you are speaking about distributed tracing, as I said, Jaeger, Zipkin, OpenTracing, they usually group by request. This is about the runtime, about how your binary's runtime is working.

Okay, maybe to rephrase part of the question, it's the last one: what are the benefits over other projects in general? Are there other profiling projects, and if so, what's the benefit?

Oh, I mean, yeah. With other projects, like tracing with OpenTelemetry, you actually have to instrument your application and you have to start and stop spans. They give you a different picture of how your application is performing. Profiles are very lightweight in terms of what it costs to enable them: it's just a line, as I showed. And with this infrastructure, you can collect them very easily. So I think you can have both. We have both: we have traces and profiles and metrics and logs.
Because, as I said, the very effective way to understand how an application, or an entire system, works is to have snapshots of it, and the more information you can have in those snapshots, the more you can understand about the application. Our snapshots are made from logs, metrics like Prometheus metrics, traces like OpenTracing and OpenTelemetry, and pprof profiles. Those are the languages that our applications speak, and we collect them and analyze them to figure out what's going on in production.

Very good. Any last questions? Awesome, great. Thank you so much, Gianluca, for the great presentation. All right, that's all the questions we have for today. Thank you for joining us. The webinar recording and slides will be online later today, so keep an eye out for that. We are looking forward to seeing you at a future CNCF webinar. Have a great day. Thank you. Bye-bye.