Hello everyone, I'm happy to be here. Let's start with some introductions. My name is Martin Fuentes, I'm a Senior Product Manager at Instana for Kubernetes observability, and I'm here today with my colleague Cedric. "Hey, I'm Cedric. I'm a product manager for our distributed tracing and OpenTelemetry portfolio. Nice to meet you all." Great. So today we have a bunch of slides to share with you. We're going to talk about Kubernetes resource management: which metrics Kubernetes takes into account when scheduling pods and containers, and the different scaling approaches. Then I will hand it over to my colleague Cedric, who is going to talk about how you can actually observe Kubernetes workloads. We'll give an introduction to OpenTelemetry and the different ways you can instrument applications with it, with some examples, and then a short demo.

With that, let's go ahead and talk about Kubernetes resource management. When we talk about resources in Kubernetes, we mainly talk about CPU and memory. Those are the most important resources Kubernetes manages, and the ones taken into account when it schedules pods and containers in the cluster. For CPU, one CPU unit is equivalent to one physical or virtual CPU core; that's how Kubernetes measures it. You can allocate fractional parts of a CPU to a workload, and the minimum you can assign or request for a container is one millicpu (1m), which is of course one thousandth of a CPU. Memory is measured in bytes, and it supports two families of suffixes: decimal quantity suffixes like peta, tera, giga, mega, and kilo (P, T, G, M, k), and their power-of-two equivalents, pebibytes, tebibytes, gibibytes, and so on (Pi, Ti, Gi, Mi, Ki).

There is one other resource that is probably less known and less used: local ephemeral storage. For some workloads it might be important to make sure your container runs on a node that actually has this kind of storage, with enough space available. It's also measured in bytes, and it also supports both decimal and power-of-two suffixes. One thing that's important to remark if you take this resource into account: long-term availability is not guaranteed, and it only covers ephemeral storage that lives inside the pod.

For every container in your cluster you can set up what are called requests and limits. The request is the minimum amount of a resource, CPU or memory, that your workload needs to run; you specify it so Kubernetes knows on which node of the cluster to schedule your containers or pods. The limits are similar, but they tell Kubernetes the threshold that the workload should not exceed: it shouldn't consume more than X CPUs or X bytes of memory. The kube-scheduler, one of the Kubernetes components, decides on which node the pod will run, depending on the available resources on that node and the requests configured for that workload and its containers (see the sketch below).
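The slide coming up shows this configuration in YAML. Since the slide itself isn't visible in the transcript, here is a minimal reconstruction of such a two-container pod manifest; the 64Mi/250m requests are the numbers mentioned in the talk, while the names, images, and limit values are assumptions for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend                # hypothetical pod name
spec:
  containers:
    - name: app                 # hypothetical container
      image: example/app:1.0
      resources:
        requests:
          memory: "64Mi"        # the 64 mebibytes mentioned in the talk
          cpu: "250m"           # 250 millicores
        limits:
          memory: "128Mi"       # assumed limit values
          cpu: "500m"
    - name: log-aggregator      # second container in the same pod
      image: example/logger:1.0
      resources:
        requests:
          memory: "64Mi"
          cpu: "250m"
        limits:
          memory: "128Mi"
          cpu: "500m"
```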
Then the kubelet will reserve at least the requested amount of resources on the node, to make sure they're available for the container to run. The kubelet is also the one enforcing that the limits are respected, so we can prevent other containers running there from getting fewer resources than they requested or need in order to run.

Now, this is how requests and limits are actually configured for a container. The slide shows a very simple pod manifest, much like the sketch above, with two containers running in a single pod. For each of the containers there is a configuration for the memory and CPU requested, and at the same time limits for the same resources. It's very straightforward: here it's requesting 64 mebibytes of memory and 250 millicores of CPU to run this specific container.

Now, what happens with requests and limits; how do they actually impact your workloads and the containers running there? If a container hits its CPU limit, Kubernetes will throttle the container, meaning the application running there will probably perform worse than before, but it won't be terminated or evicted; you'll get less performance, but your application will still run. On the other hand, if a container hits or exceeds its memory limit, it will actually be terminated by Kubernetes. The pod will die, and if the pod was managed by an application controller like a Deployment, StatefulSet, or DaemonSet, the controller will make sure to spin up a replacement for the pod that just died because of its memory consumption, so the desired state is always respected. It's important to keep in mind that this process can happen in a loop: if you have a memory leak in your application, your application may be dying very often, because Kubernetes will always make sure the memory limit is not surpassed.

It's also important to look at how your applications consume these resources compared to their requests and limits. I brought this example to show that it's not only important to allocate the minimum your application needs, but also to avoid requesting more than it actually needs, because those resources will be reserved for the application while not actually being used. In this chart, for example, you can see that the actual memory usage of the application is below the request. That means there is some memory that is not really used by the application but is still reserved for it, because it was requested. So it's really important to have a visual way to see how the configuration of requests and limits is doing against the actual consumption of those resources by the application.
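As a side note, not from the slides: when the memory-limit loop described above happens, it shows up in the pod status. A sketch of roughly what `kubectl get pod -o yaml` reports in that situation (values illustrative):

```yaml
status:
  containerStatuses:
    - name: app
      restartCount: 12              # climbs with every kill-and-replace cycle
      lastState:
        terminated:
          reason: OOMKilled         # the container exceeded its memory limit
          exitCode: 137             # 128 + SIGKILL(9)
      state:
        waiting:
          reason: CrashLoopBackOff  # the kubelet backs off between restarts
```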
Moving forward, I also wanted to talk about scaling. There are two different types of scaling in Kubernetes, and horizontal scaling has two different meanings depending on whether it applies to a node or a pod. At the node level it means adding more nodes to a given cluster, so you have more servers in that cluster to allocate workloads to. At the pod level it means adding more running replicas of an application. For example, you have an application taking requests from end users, and at a certain point you see its performance degrading; you spin up more replicas of that application to handle more end-user requests, and in that case you are horizontally scaling your pods. In the case of vertical scaling, at the node level it means modifying the available resources of each node: if you have, say, a virtual machine running on a physical server, and you modify the resources available to that virtual machine, you are vertically scaling that node. For a pod it means adjusting the requests and limits. As I mentioned before, it's important not to request more than what your application needs; when you do that fine-tuning of requests and limits, taking the actual consumption of your application into account, you are vertically scaling your containers or pods.

I'd also like to spend one more slide on horizontal and vertical pod autoscaling, focusing on pods here, because in the end that's what is going to be running in your cluster and the configuration you have the possibility to influence. There is a way to automatically scale your containers in Kubernetes called HPA, which stands for Horizontal Pod Autoscaler. With it, you can set up a target that tells Kubernetes at which threshold it should start spinning replicas of your deployed container up or down (see the sketch below). This is not possible for all application controllers or workload types, but you can do it for a Deployment, StatefulSet, or ReplicaSet, for example. In the case of Vertical Pod Autoscaling, there's a mechanism that allows Kubernetes to dynamically adjust the CPU and memory attributes of your pod. So you are, as I mentioned before, modifying the requests for that pod, but it can also happen in an automated way, driven by Kubernetes.
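For reference, a minimal sketch of what such an HPA definition looks like; the target names and thresholds here are assumptions, not taken from the talk:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa                  # hypothetical
spec:
  scaleTargetRef:                   # the workload being scaled
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80    # add replicas above 80% average CPU usage
```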
Now, to wrap up this part, I wanted to give you a summary of how you would look at this information in an observability tool. It's important to have visibility into all the different components of the cluster: the tool you're using should let you see not only the requests and limits of specific containers or pods, but also a summary or roll-up of requests, limits, and resource usage at the different levels of the cluster. You can look at that at the node level, by namespace, or even for the whole cluster.

With this, before I hand it over to my colleague Cedric, I'll just take a look at whether there are questions. Okay, I don't really see any questions from our audience yet; I encourage you to send us any question or doubt that you have, we're here to answer them. So I'm handing the presentation over to Cedric... oh, wait just one second, Cedric, because a question just popped up. Deepank asks: do we vertically scale using Golang? In this case it actually doesn't depend on the technology or the language you're using. As long as you know what resources your application will use, you have a way to tell Kubernetes the quota it has to run with and the thresholds for scaling your pods up or down vertically. It does not depend on the language your application is written in. I hope that answers your question; thank you, Deepank, questions are always great.

Let's move on to the actual observing part... or is there more? I see one more question, do you want to answer that? Zuresh asks: during vertical scaling, is the container recreated, or is the scaling done on the fly? If it's autoscaling of the cluster, it would actually be recreated... oh sorry, no, Zuresh, you're talking about vertical scaling. That's done dynamically, so the container is not going to be recreated; it happens on the fly, dynamically modifying the requests for your workload. All right, that looks to be it. Thanks, I think we've caught up with everything.

Let's then go to the observing-workloads part, and I think this part needs a bit of preparation, right? We are here for observability, and I think we should talk about what we actually mean when we want to establish or facilitate observability. It's the same whether we talk about Kubernetes, a more traditional scheduler like Nomad, or just your plain host-based application that you deploy via floppy disks or whatever. Observability is inferring the state of a workload by looking at its inputs and outputs. We want to consume signals from a given service, and when we say signals, we usually mean traces, metrics, and logs; then we want to infer the state of the application. We analyze the logs to see if there are any errors in them; we look at the metrics to see if the numbers are healthy (processing rate and latency, for example, are very good measures). With tracing it gets a bit more interesting: we usually mean distributed tracing, so it's not just the one service involved in a transaction, it's rather a distributed transaction. You have your banking application on your mobile phone, and when you tap to wire some money to a friend, the request goes from your mobile into some edge service, from the edge service it's probably distributed into some kind of bank-management system, and maybe it even ends up on a mainframe in the cellar. These kinds of transactions get increasingly complex, and that's why the distributed part is very important.

We want to collect these signals from services and infrastructure that you are running either inside traditional data centers or inside Kubernetes. Then, once your services are emitting the signals, you want to enable collection of them: there needs to be some kind of tool that catches all the traces, all the metrics, all the logs, and maybe allows some post-processing, for example to remove personal information from the data. And then you want to store the signals.
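To make that collection step concrete, jumping slightly ahead to the OpenTelemetry Collector introduced below, here is a minimal sketch of such a pipeline, including a post-processing step that strips a hypothetical personal-data attribute (all names here are assumptions):

```yaml
receivers:
  otlp:
    protocols:
      grpc:                              # accepts OTLP on the standard port, 4317
processors:
  batch: {}                              # batch signals before exporting
  attributes:
    actions:
      - key: user.email                  # hypothetical PII attribute to remove
        action: delete
exporters:
  otlp:
    endpoint: backend.example.com:4317   # your storage/analytics backend
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [attributes, batch]
      exporters: [otlp]
```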
If you want to take a deeper look five minutes later or the next day, you need storage that is capable of keeping these signals for you, which leads directly into the next part: at some point you will want to analyze all the signals with an analytics engine. All in all, this has traditionally been very complex, and it was a space dominated by a few vendors: they did the data collection for you, and they provided the analytics engine. That has shifted a bit, so let's take a look at the next slide.

What's already in the title of this talk is OpenTelemetry. OpenTelemetry is really an open standard that cares about observability as a whole, about all the different steps in the process. It has a component for collecting the data, called the Collector, which receives, processes, and exports all the signals at your disposal, basically. It provides instrumentation libraries; these are the things that live in your processes, analyze the data flow there, and do instrumentation at the code level (intercepting Java via bytecode, for example) to infer traces, logs, and metrics for you. And what's interesting is that the project even provides auto-instrumentation for some runtimes, such as Java, Node.js, Python, and .NET. That basically removes a lot of the manual work of data collection: you just include these OpenTelemetry in-process components in your application, for example the Java agent in your Java application, and the agent will automatically instrument your workload, so you get to value in minimal time even with this open-source option.

The OpenTelemetry project also cares about deployment helpers. You need to deploy all of these components: instrument your workloads, deploy a Collector, make sure the networking in your Kubernetes cluster is set up, all of that, and the community is taking care of it. There is a Helm chart that will deploy the Collector for you, and there is a Kubernetes operator that will auto-instrument your workloads: it does some transparent modification of your Kubernetes workload definitions and automatically injects instrumentation for some runtimes, which is a really cool feature.

And probably one of the best things this project has established is a shared protocol: the OpenTelemetry protocol, OTLP. It covers the data transmission between the collection components and a Collector or a vendor, and it's a standard. So if you're not satisfied with your current observability solution, you can pack your things and just point your workloads at another vendor without having to re-instrument everything, which is cool. The project is governed by the CNCF, it's an open process everyone can participate in, and it's entirely open on GitHub. If you're interested, head towards GitHub and look for OpenTelemetry; there are tons and tons of great people and discussions there.
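To make the operator's auto-instrumentation mentioned above concrete, here is a hedged sketch of the `Instrumentation` resource it works with, plus the pod annotation that opts a workload in; the endpoint and names are assumptions:

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: default-instrumentation            # hypothetical
spec:
  exporter:
    endpoint: http://otel-collector:4317   # where the injected SDKs should report
---
# A workload opts in via a pod-template annotation, e.g. for Node.js:
#   metadata:
#     annotations:
#       instrumentation.opentelemetry.io/inject-nodejs: "true"
```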
Next slide, please. So, let's take a look at the instrumentation: what does it actually look like? If you are not in a position where you can or want your workloads to be automatically instrumented, you are probably familiar with this situation: you need to pull in a vendor library, and you need to add some code bits in and around your business logic to facilitate collecting traces, for example. You would need to wrap all of your business logic in some mechanism that declares "this is a transaction, take care of it and export it to an observability tool." With OpenTelemetry, and especially with Java, this gets crazy simple. What I brought today is a snippet from a Dockerfile. You can see it's basically just pulling the plain OpenJDK 17 image, and then it pulls the OpenTelemetry Java agent, which is a JAR file, from GitHub. In the CMD line we incorporate that JAR by using it as a Java agent, attaching it to our JVM. What that will do is automatically modify your code along well-known code paths, such as popular web frameworks; it will do that wrapping for you and automatically collect the trace signals. And that's it, there is no further work to be done. You can enhance or augment your experience here with an SDK, but that's entirely optional: you just pull it in, it works, and it will even automatically find its way to the OpenTelemetry Collector. If the Collector is available at the standard port, the connection is made automatically.

Cool, so much for Java. I brought another example: Node.js. It's a bit more complex. You can see that with OpenTelemetry the landscape is not homogeneous; it's a very diverse community, and the different projects are at various stages of maturity. But automatic instrumentation is possible with Node.js as well, and one option would be to use the operator in Kubernetes to automatically do what's on this slide. Since we're here to learn something, though, I thought it would make sense to walk through the steps. On the left-hand side there is the Dockerfile once again: we pull the plain Node 17 image and copy some files in, and what you will notice is that in line 13 we are requiring a file that is basically prepended to everything the application does; we require that tracer.js file up front, before starting the application. Its content is on the right-hand side. We are configuring OpenTelemetry here, specifically the Node SDK portion of OpenTelemetry, and we wire up the OTLP trace and metric exporters. In lines 10 and 14 they consume a canonical environment variable, which you can set on your workload so that they find their correct target. Then all you have to do is start the SDK in line 24, and you're done. By means of the instrumentations config in line 21 and the automatic resource detection in line 20, you are covered. The SDK will automatically start reporting and will automatically recognize its environment: it detects whether you are running on GCP or AWS, in a Lambda function, in a Fargate container, in an Azure Function, or in a Google Cloud Run container (that's facilitated by resource detectors), and every signal will be annotated with this information so that you can easily consume it later down the road. The auto-instrumentation registration in line 21 makes sure that standard libraries in the Node runtime, and even some community libraries, are automatically instrumented. So when you have an Express app and a request invokes a controller or a route, it will automatically collect span data, trace data, for you. Cool. And that's really it: instrumentation is done.
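One practical note on the Java agent described above, not covered in the talk: if rebuilding the image is not an option, the same attachment can be expressed at the Kubernetes level through the standard JAVA_TOOL_OPTIONS JVM hook. A sketch, assuming the agent JAR is already present in the image or a mounted volume:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: java-service                # hypothetical
spec:
  selector:
    matchLabels:
      app: java-service
  template:
    metadata:
      labels:
        app: java-service
    spec:
      containers:
        - name: app
          image: example/java-service:1.0
          env:
            - name: JAVA_TOOL_OPTIONS   # the JVM picks this up on startup
              value: "-javaagent:/otel/opentelemetry-javaagent.jar"
```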
Next slide, please. If you want to learn more about instrumentation, we have a demo application up on github.com, with examples for Java, Python, Node.js, even Golang, plus an instrumented NGINX and an instrumented Apache httpd web server. Check the project out; I think the examples are fairly straightforward. And with that, let's check if there are any questions.

"Is auto-instrumentation available for Golang?" Great question. Why is it a great question? Go is a compiled language, right? It's inherently hard to auto-instrument compiled binaries. While there are some proprietary options available for auto-instrumentation, the OpenTelemetry project is not currently at the point where it's investing in auto-instrumenting Go applications, but that could very well become an enhancement driven by the community. Thanks for the question. Next slide.

Remember my first slide? Now we need to collect all the data that our workloads are emitting, and as the OpenTelemetry collector we have chosen a specific artifact. Instead of using the OpenTelemetry Collector, which is a project and a specification at the same time, let's deploy the Instana agent as an OpenTelemetry collector. That is our host-based agent, basically, which you would roll out onto your production systems to observe anything out of the box, and it can ingest OpenTelemetry for easy augmentation of our already available automatic instrumentation. This specific example takes care of creating a namespace in your Kubernetes cluster if it doesn't exist, then deploys the specified Helm chart and sets some default configuration, and in the second-to-last line we set opentelemetry.enabled=true. That's all you need, and it makes our agent available in your cluster as an ingress point for OpenTelemetry data. It's addressable via a DNS name in your cluster, and it listens on the standard port, so it's a really transparent process. Now combine that with configuring your workloads, which we will look at next, and data collection is basically done: you deploy the agent, or any other OpenTelemetry collector, and collection is taken care of. Next slide.

Yes, configuration of Kubernetes workloads. As I said, we want this to be an easy configuration, and in reality all you will probably need is the OTEL_EXPORTER_OTLP_ENDPOINT environment variable on the process. You modify your deployment or pod specification and just inject this environment variable, making it point at the internal DNS name of your OpenTelemetry collector, which in this case refers to our agent but could be any other collector. In addition to that, you would set the OTEL_SERVICE_NAME environment variable, which is about the only required thing in OpenTelemetry: everything needs to be recognizable, right? So it requires you to set the service name, and it's easiest to do that from the outside with environment configuration. And with that, our workload configuration is done as well. Easy, right?
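Putting those two environment variables together, the workload side is just a small patch to the pod template. A sketch; the agent's in-cluster DNS name and the service name are assumptions, and 4317 is the standard OTLP gRPC port:

```yaml
spec:
  template:
    spec:
      containers:
        - name: app
          env:
            - name: OTEL_EXPORTER_OTLP_ENDPOINT
              # internal DNS name of the collector, here the Instana agent
              value: "http://instana-agent.instana-agent.svc:4317"
            - name: OTEL_SERVICE_NAME
              value: "shipping-service"   # hypothetical service name
```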
Now we are getting to the beefy part, right? We have taken care of a lot of things, but now we want to analyze our telemetry data. You would now select the vendor of your choice for your observability needs and make sure that your collection mechanism, your OpenTelemetry collector for example, ships the data from your devices to the vendor's premises. In this example we have already made that decision: we work at Instana, so we want to use our own product in this case, because we know it well, how to use it, and what value it brings. So we thought we would take a few minutes for a very short demo, and basically, by deploying the Instana agent, we already have an analytics backend, and we have already configured it. I see there is a rather large question coming in, and I would like to address that in the Q&A part after the short demo. Thanks.

Okay, Martin, let me share this screen. So, this is Instana; you can all see it now. And this is a dashboard for a Kubernetes cluster. You'll recognize it because Martin already showed it to you. You see that we list all the different object types in a Kubernetes cluster here, grouped under the specific cluster. This is our demo environment, which we use for customer presentations, so it has a bunch of demo data in it, and one particular thing I wanted to show you is the OTel shop in action, basically. I promised you there would be a demo project, and I think this is a very good example.

Since Instana is all about giving you full context while analyzing your observability signals, you can always go from infrastructure elements, like this Kubernetes cluster, to the trace data or log data they have connected. Let's take a look at our analytics section, which is here, and say: hey, dear Unbounded Analytics feature, please give me all the span data that you have for this specific Kubernetes cluster. And it will do exactly that. What we have here is all the trace data produced by that specific cluster, grouped by namespace, and you can see that we recognize objects like services, for example the different OTel shop components, and we can analyze the trace data to get a better picture of what the OTel shop really is.

Let's look at our application perspectives section. Application perspectives are a concept that we wrap around individual services, basically a way for you as a customer or user of Instana to segregate your services into more cohesive units. In this case, let's take a look at the OTel shop, which is one of the application perspectives that I defined, looking at the configuration first; it's basically defined here by means of the host zone. There is a ready-made dashboard for you to analyze the health of your application. Now, what is my application made of? I see something here that says "OpenTelemetry shop"; okay, probably an online shop, but what does it consist of? We have a tool that we call the dependency map, over here, and this is really a way to analyze your application's data flow. Don't mind that OTel shop web service over there, just sitting around and probably doing nothing; the Apache httpd instrumentation is not very mature yet, so I assume an issue with the actual instrumentation. But take a look at the OTel shop NGINX front service: judging by the name, this is your front proxy. It receives calls from a load generator, and we can see all the services that it talks to.
So a user might check out the shipping options for items in your store; they might want to change their password through the OTel shop user service; they want to rate products via the rating service. And the rating service, for example, calls into something else: you can see in the tiny pop-up here, this is the ratings database, and the ratings database receives calls only from the rating service, as it should. It's all observed, all inferred, out of the box. And what do we have over here: OTel shop cart. The cart service is talking to the shipping service, which makes sense, because for the things you have in your cart you want to know the shipping options, and we can see that this is an HTTP service.

For every individual service we can go to its dashboard, once again an opinionated dashboard, and see all the calls that were created by the OpenTelemetry auto-instrumentation; in this case the shipping service, say, is a Java application. All of the transactions that you can see here were created automatically; there is no additional polishing from our side, at least not that I would remember. Let's take a look at this one: an HTTP POST that was created by the auto-instrumentation. We natively blend OpenTelemetry data alongside the automatic instrumentation we have in our own, more or less proprietary, set of technologies, so you can mix and match OpenTelemetry and Instana auto-instrumentation whenever you like. But this set of services is really all about OpenTelemetry.

So here's the call graph, and now it gets interesting: one of the measures of application health, for example, is call latency. You want to know how long your call is taking, but you not only want to look at that very specific call, you also want to look at all the other calls that were made to that endpoint. So we give you a way to very transparently look at the numbers and see all the points in your distributed transaction. There's this call: it enters through the front proxy and goes into the shipping service; there is an HTTP call outgoing from the NGINX front to the OTel shop shipping service, then some internal work happens (there's a controller involved, and then a cart helper).

I can do a lot with this. But imagine you are a developer or a DevOps persona and you want to track down a production issue, say an issue with calls to a specific service. You can do that by means of checking a box; it's not visible right now, let me clear out some filters here. So: we see that we have erroneous calls in our systems, and we would like to investigate those, either because we are on call and an issue was reported, or because a customer is asking why their transaction failed. And we see: oh my gosh, there are a lot of failed transactions; red means "contains errors". Take a look at one: once again the distributed transaction, and we immediately see that the payment service actually failed. We can now infer where the distributed transaction went wrong. We know that we have a status-code error; in OpenTelemetry there is a specific attribute that can be applied, and there was a status code 500. So something in my application's logic is apparently wrong, and I can now go fix it. I think it's very straightforward to do that analysis over here.
So that was a tiny glimpse into using OpenTelemetry tracing with Instana. We have a lot more in store, ranging from end-user monitoring to our own tracing libraries with some more expert knowledge built in. But I would really recommend that you check out our demo project, take a look at the instrumentation, enjoy the power of OpenTelemetry, and, you know, make it your own. I'll stop sharing the screen and take care of some questions, maybe.

So, first of all, a thank-you: my team put up this slide, thanks a lot. There's once again the link to the demo application on GitHub. And if you want to play with Instana, we have a demo environment accessible to you; you can access it from your browser. I think you don't really have to register, you just need to leave your email address, and then you can play with Instana in that environment.

Looking at the questions: "Consider the small microservices segment below. There is an SLO breach at ingress node A, and at the same time we see anomaly alerts on containers B5 and B8. Is there a way we can quickly plot a flow chart using the traces, or any mechanism to find the issues quickly, so we can find the root cause? The goal is to reduce the MTTR (mean time to respond), maintain a healthy SLO (service-level objective), and use the error budget effectively." Well, I suppose you are advanced in your DevOps journey if you're playing around with SLOs and SLIs, which is great by the way; we can support you with that. And the question is: can you plot a flow chart using traces to find the issues? Yes, though it depends on your analytics engine. In principle your distributed tracing provider, and OpenTelemetry instrumentation as well, will supply you with the graph of a call: the whole history of the services a call traveled through. You have it at your disposal; you have a trace ID, you have span IDs for the individual components and sub-components, and you can really use that for the analysis. SLOs are a bit more on the hard-analytics side: you need an analytics engine that takes care of analyzing data over time and keeping state, and that's not a trivial thing to do. Our product has capabilities to do so, but since we are talking about OpenTelemetry, which is not yet at the analytics stage, I would say: yes, you can do that, you just need a vendor that supports it.

And then the follow-up question: can we calculate SLOs with OpenTelemetry? Yeah, for sure. It collects tracing data, metrics data, log data, whatever your objectives are made of, so you can of course use OpenTelemetry for that. You would really be dependent on your vendor or your data-analytics stack; if you want to host it yourself, for example with a Prometheus stack or some Grafana product (very popular choices), there are mechanisms in those analytics engines. But it's really up to you and your tool to settle on a best-practice approach; there's nothing that OpenTelemetry itself will provide as a best practice for defining your goals.

"Can the tool monitor the flow between clusters?" Interesting question. Can you see my screen again? Because the answer is yes. I will show you a very cool feature of Instana that I failed to highlight earlier, which is really a pity. My bad.
So, first things first: OpenTelemetry does not care whether your transactions are distributed or just happen on localhost. But they can be distributed by nature: they can go beyond a single cluster, they can go to your payment provider, they can go through your cloud edge provider if it supports tracing (Cloudflare, for example, has an option to do that). It's really about the data collection: if you can get the data, you have the distributed transaction, and it really doesn't care where it's running, whether it's starting in a Kubernetes cluster, ending in a Lambda function, traveling through a Cloud Run container, and maybe at some point even going through a mainframe.

But one thing our tool does is infrastructure correlation; I mentioned it briefly. For every span, for every sub-interaction that we show here, we can pinpoint the specific process together with its physical context. Here's a Linux machine hosting a Kubernetes cluster, there is a Docker container involved, a Kubernetes pod, and I can jump directly to the specific Python process, see its metrics, and directly judge its health. If you take the distributed aspect beyond what I just showed and want that infrastructure correlation across environments, you will need some additional help, but it's perfectly possible; it's actually the reason why distributed tracing exists.

That's all for the questions... any other questions? I see that we have five minutes left. No further questions. Great. That either means it was too much information, or you're all busy googling OpenTelemetry right now. "What is your positioning against Dynatrace?" Okay, now we're getting to the hard questions. Since we're not focusing on competitive positioning here, I would rather take that discussion offline; my contact info is available if you want to have it. Hit me up.

Well, thank you so much, Martin and Cedric, for your time today, and thank you to all the participants who joined us. As a reminder, this recording will be on the Linux Foundation YouTube page later today, and we hope you're able to join us for future webinars. Have a wonderful day. Thanks for joining.