All right everybody, welcome back to another OpenShift Commons Operator Hour, which we like to do on Wednesdays. We have someone, one of the many folks who have built operators that run on OpenShift, come in and talk about what they're doing, why they built it, and what their operators do. Today we're really pleased to have Instana here with Matthias Lübken, and he's going to talk about using Instana's offerings to successfully manage applications running in Kubernetes. As he's wont to say in his title, Context is King; I'm going to let him explain that and introduce himself, and then we'll have live Q&A at the end. So thank you all for joining us today, and take it away, Matthias.

Yeah, thank you very much, Diane. My name is Matthias Lübken, and today I'd like to talk a little bit about our experience of running our operator and running Kubernetes workloads, and about the challenges we see our customers facing while running these. Just a few words about myself: I'm the PM for Kubernetes and infrastructure at Instana. I've got experience in software development all over the place, and I've actually been with Red Hat and done some interesting stuff over there. Now, with Instana, we're focusing on helping developers and DevOps teams manage all the crazy things we're seeing, and operators help a lot. I hope this talk helps a little bit and passes on some of the experience that we have.

All right. So basically, the whole talk in one slide: this is the agenda, this is the talk. What I would like to do today is give you an understanding of what to look for when you're running an application in Kubernetes. We separated the different aspects into different perspectives and came up with three basic perspectives, three different views on this. That's what I would like to share today, and also give you very tangible ways of doing it; the slides contain a lot of further links to deep-dive into how to get this going for yourself. I hope there's a lot to take away.

This was something that was brought up in May, and I love the phrasing of it: Kubernetes is very good at solving the problems it introduces in your environment. Kubernetes and OpenShift are awesome platforms for managing distributed setups, but at the same time they also introduce a lot of complexity that we might or might not have been exposed to before. And I think this is a real challenge. Yes, we're techies and we want to get it all solved, but it's not always simple. There are challenges for developers, there are challenges for DevOps, and there's a lot we can do better. Hopefully we're doing part of that today.

If we take a step back and look at what a Kubernetes application actually introduces, it's quite a few new attributes that make these things really challenging. First of all, we've talked about microservices-based applications before, but to be honest, it hasn't been at the scale I'm now seeing with Kubernetes. With Kubernetes, a decent-sized application easily consists of hundreds, even a thousand microservices, all very different. And you can debate the pros and cons of microservices.
But the fact of the matter is that Kubernetes allows us to do this and gives us a lot of the means for doing so. People are taking advantage of it, but they also need to manage it. The rate of change also increases dramatically: OpenShift provides a great platform for continuous deployment, and the increase in change is just tremendous. And pods come and go, right? You've got autoscalers, you've got rolling deployments, and everything is getting ephemeral. At least for me, but also from what I'm seeing with customers, that's something people have not really been used to. And containers are awesome: suddenly you can just pack everything into a container, put the runtime in it, and pick whatever technology you want. Everything is polyglot now. But who manages this? Who's taking care of it?

So when you look at what that means, what are the problems and challenges you face? These are some of the ones I hear most, whether people are using Instana or any other tool to see what their Kubernetes applications are doing. First of all: what is actually affected? I'm building my application, it has a problem; what is actually affected? What metrics do I need to look at? What is the CPU utilization, what are the request rates, and where do I start looking? What do all these metrics actually mean? Okay, now I've got a list, but how do these relate to each other? What else do I need to look at? And what is the root cause of the problem? Again, this is about developers and DevOps teams working in an environment that is new to them. Maybe some parts of the industry have been working with this for quite a while, but for many people it's new. And operators put another level on top of it: they do codify a lot of these things, but they add another layer; they might have their own means of sharing the distributed load, and they bring their own custom resource definitions. And again: which metrics do I look at? What do they mean? So that's another layer of challenges in managing a Kubernetes application.

All right, let's look at an example, just to make what we're talking about a little more tangible. This is a simple, but probably not too unrealistic, example. Let's say we're building a search application using Elasticsearch, the common tool for that. Running this in Kubernetes obviously means we have a Deployment in a namespace. Elasticsearch itself runs on the JVM, so that JVM is running in a container in a pod. Most likely we're running this on a Linux host, which is itself a Kubernetes node with its own properties. And all of this is running in a Kubernetes cluster, in some availability zone. So the question is: what do I need to know when something happens? If you're looking at a problem, all of these layers surface.
That was the simple example, the simple stack. The more realistic example is that we're actually talking about a cluster: Elasticsearch is not a single node but a cluster, and that cluster obviously consists of a couple of nodes. And we might have a Spring Boot application or another Java application in front of it, most likely with a similar stack.

So let's look at a problem that could happen. Let's start in the lower right-hand corner. Say the node has an IO problem, and that particular shard in the Elasticsearch cluster becomes something to watch out for. Then maybe the thread pool on the JVM of that Elasticsearch node has a problem and shows a warning sign; in the diagram, the yellow circles are warnings in your system. Then another Elasticsearch node needs to take over, and its thread pool gets so overloaded that it goes beyond a certain threshold; requests queue up so much that it can no longer fulfill the service it was supposed to provide. So the overall cluster throughput decreases, and the performance of my search application decreases with it. The message I want to bring across is that this is not a too complicated, too far-off application; it's a rather simple one, yet specific problems could be anywhere in this setup. I would like to bring a little order into this and give some guidance on where to start and what to look for.

Here's a little example from our tool. Instana is a monitoring and observability tool; we gather data from all sorts of sources, and we've got a dynamic graph that links everything together. This is a visualization of one project, the "death star" of all these components hooked up together; quite fun. The live version is even more fun because it's dynamic and you can see things moving.

All right, so how do we get started? Let's look at a suggested model for reasoning about this. Anyone looking at ways of managing and monitoring a modern application agrees that the first thing to reason about is services. A service, whatever the user, the internal user, or a dependency within the microservice landscape needs it to provide, is the logical abstraction we're looking at, and we're going to talk about it. But we can't stop there, as I tried to show earlier. The second perspective, and this one is hopefully obvious for this audience, is the Kubernetes environment: the Kubernetes or OpenShift environment, understanding what's happening there, getting an overview of the namespaces, the pods, the deployments, the other workloads, and so on. The third one, and let's see what we can get out of the conversation here, is what I would like to argue for: the infrastructure level is not going away. We try to build up abstractions, but from what I'm seeing, looking into examples, there is always a point where you start looking at how a particular container is behaving on a particular host. So infrastructure, as I showed earlier with the IO example, is still something we need to look into.
But as the talk says, Context is King: you need to understand how all these things relate to each other. So that's the third dimension of the perspective I'm going to talk about.

In one slide again: if you reason about how to manage Kubernetes applications, I would start by looking at the three core perspectives, the logical service or application, the Kubernetes layer itself, and the infrastructure layer, and how they all tie together. It's interesting, when I started talking about this, that you could map these to specific roles, and certain roles are naturally bound to one or the other: you can think of the services as being for the developer, the Kubernetes side of things for DevOps, and the infrastructure for the ops side. So I put them there. But the nice thing about DevOps is that we're not building up walls; we don't want to cut things off and say "I don't care about the rest, just give me a host and I'm done with it." We want to combine these, so I think it's also important to share these perspectives, share these views, share these metrics and dashboards between the different parts of the organization. I'm looking forward to hearing what you think, but that's our perspective.

Now, just a word: this is one view of it, which we've built into Instana, but there are lots of other perspectives, like end-user monitoring, business and custom metrics, synthetics, developer workflows, security, yada yada yada. I actually had an argument with a colleague of mine while preparing this talk, that something else was much more important than the three perspectives I'm talking about here. So it's just one view, and if you think others are more important, I'm happy to reason and talk about it. But I believe these three are generally applicable.

All right, so we've got the three perspectives. What should I look at, and how should I look at it? And then, last but not least, put them in context. Let's start with the service. What am I looking for? My definition of a service is something that has a logical context to it: it's implementation- and infrastructure-independent, and we care about what the service provides to its user. We've also got SLIs and SLOs, which work perfectly with this. The important piece is that it's a logical unit that serves a user. And we're looking at an implementation-independent definition here, because technology-specific KPIs can be misleading, and maybe you want to swap the service implementation for a different technology. There are so many things you could consider about the technology, especially in Kubernetes with its polyglot environment, as I briefly mentioned. So let's take the logical, abstracted view of this. We don't have to overdo it, but that's the idea. If we then look at what to observe, the first question you need to answer for yourself and your team is the granularity of a service.
We support very heterogeneous environments, and in Kubernetes there's already a structure for what a Service is, but it doesn't have to be that. You're not bound to it, and I think you shouldn't bind yourself to a Kubernetes Service as such; think again of a service as a logical unit for yourself. If you have an old web app or a web server, you might split out different endpoints; you might have different granularities, whatever works for you. Our default is something that is named and has a certain type, like HTTP or database and the like. But again, the granularity of the service is up to you.

The other thing to consider is some sort of higher-level assembly. An operator is a good means of stitching these things together; there's also the Application CRD, or Helm. But it doesn't have to be on the Kubernetes side of things. You can also think of it differently: maybe some other backend unit also belongs to that logical assembly, and I'm using this term very loosely. We call these "application perspectives", but use whatever works for you to combine a couple of logical services together to serve what's needed.

There's a pretty good understanding right now of what to observe. The four golden signals from the Google SRE book are the well-known ones: latency, traffic, errors, saturation. Tom Wilkie, formerly of Weaveworks and now at Grafana Labs, introduced the RED method, which resonates a little more with me because it takes out saturation, which is often a very technology-specific component. That said, both work; I'm going to go with rate, errors, and duration throughout the talk. Here are some examples: on the left-hand side you see an Instana dashboard where we show that information for a service, but obviously this is not bound to any particular tool. On the right-hand side is a Grafana dashboard that shows similar stats, and in OpenShift you get similar views showing the traffic and the errors: RED, request rate, errors, and duration.

The question, looking at this, is: okay, that's great, but how do I get this data? There are many different ways of doing it. The most common one out there is capturing these natively, with some library, out of the workload itself. In the OpenShift and Kubernetes space, Prometheus is the standard, and if we look at Java again, there are tons of options: the Prometheus Java client (client_java), the JMX exporter, or Micrometer. That's the most common approach we're seeing, and probably the most straightforward.

A different way of doing this is to derive the data from distributed traces. With distributed traces, which is what Instana is built upon, you look at the traces between the different services and use those traces to calculate the different KPIs. The advantage is that, first, you don't have to do anything: if you're using a service mesh, or if you're using Instana, you can capture them automatically. The other advantage, at least with Instana, is that you can change this dynamically; with Instana and other tools you can dynamically change the service composition and granularity, which I talked about earlier. So that's an advantage of working this way. Just one note: if you're sampling the traces, please be wary of this and store the metrics separately. There's a good link I found from a Red Hat colleague who did exactly this with OpenTracing and Jaeger to collect and store application metrics in Kubernetes, and there's another example with OpenZipkin.
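To make the native-capture option concrete, here is a minimal sketch of RED instrumentation with the Prometheus Java client (simpleclient). The metric names, the endpoint label, and the handleRequest wrapper are illustrative assumptions, not Instana's implementation; the rate is derived from the request counter at query time, e.g. rate(http_requests_total[5m]) in PromQL.

```java
import io.prometheus.client.Counter;
import io.prometheus.client.Histogram;
import io.prometheus.client.exporter.HTTPServer;

import java.io.IOException;

// Minimal RED-method instrumentation: Rate and Errors as counters,
// Duration as a histogram. Prometheus computes the actual rate at query time.
public class RedMetrics {
    static final Counter REQUESTS = Counter.build()
            .name("http_requests_total").help("Total requests.")
            .labelNames("endpoint").register();
    static final Counter ERRORS = Counter.build()
            .name("http_request_errors_total").help("Failed requests.")
            .labelNames("endpoint").register();
    static final Histogram DURATION = Histogram.build()
            .name("http_request_duration_seconds").help("Request latency.")
            .labelNames("endpoint").register();

    // Wrap your real handler with this; 'endpoint' is whatever service
    // granularity you settled on (path, logical name, ...).
    static void handleRequest(String endpoint, Runnable handler) {
        REQUESTS.labels(endpoint).inc();
        Histogram.Timer timer = DURATION.labels(endpoint).startTimer();
        try {
            handler.run();
        } catch (RuntimeException e) {
            ERRORS.labels(endpoint).inc();
            throw e;
        } finally {
            timer.observeDuration();
        }
    }

    public static void main(String[] args) throws IOException {
        new HTTPServer(9400); // expose /metrics for Prometheus to scrape
        handleRequest("checkout", () -> { /* real work would go here */ });
    }
}
```

The same three series map directly onto the dashboards shown on the slide: a rate panel over the request counter, an error-ratio panel over the two counters, and a latency panel over the histogram.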
All right. So we've got the logical service, and whatever we do, whatever we look at, that's always the starting point. That's where we base our SLIs, our service level indicators, and our SLOs, the objectives we strive for. It's the starting point for it all: if you do only one thing, then do this. But to understand the whole picture, the other perspectives are equally important.

Which brings us to Kubernetes. I probably don't need to say too much about Kubernetes itself for this audience, but the orchestrator of distributed workloads has a lot of new things to take care of that might have been hidden before. Kubernetes opens up this environment for us, schedules the workloads across the fleet, and makes resources available to the actual workload. Something I didn't include in the earlier example: persistent volumes for the Elasticsearch application, so the data can actually be stored. That's the job of Kubernetes. It has these great APIs everyone is talking about, and it makes sure that the different workloads, new workloads with operators and existing workloads with existing schemes, are distributed throughout the system, and that the cluster knows how to work with them at the beginning and throughout the lifecycle.

Now, what do I need to look at here? The first thing is the cluster itself, depending on where you are. Even if someone else is managing it, you probably need to look at the cluster itself, the control plane, just to make sure the cluster runs, so that if there is a problem you can correlate it. Is etcd behaving as expected? Is the cluster state being distributed through etcd? That's one piece of information you need to gather. On the workloads themselves, the distribution state is essential: how many of my desired pods are actually running? If it's a DaemonSet, is it evenly distributed across all the nodes I want covered? If I have a Deployment, is it at the scale I need, and if something goes unavailable, is that still within my budget, or is it something I need to act on? Also on the workload side, for the scheduler to make sure the workloads are distributed, we have requests and limits, so we need to consider these in context with the others to make sure we've got that covered.
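As a concrete illustration of the "is this still within my budget?" check, here is a small self-contained sketch of the maxUnavailable arithmetic Kubernetes applies to rolling updates. The class and example numbers are hypothetical.

```java
// A tiny sketch of the budget check for a Deployment: desired replicas,
// currently unavailable replicas, and a maxUnavailable budget expressed
// either as an absolute count or as a percentage, mirroring the
// Kubernetes rolling-update field.
public final class AvailabilityBudget {
    private AvailabilityBudget() {}

    static boolean withinBudget(int desired, int unavailable, String maxUnavailable) {
        int allowed;
        if (maxUnavailable.endsWith("%")) {
            int pct = Integer.parseInt(
                    maxUnavailable.substring(0, maxUnavailable.length() - 1));
            // Kubernetes rounds maxUnavailable percentages down.
            allowed = (desired * pct) / 100;
        } else {
            allowed = Integer.parseInt(maxUnavailable);
        }
        return unavailable <= allowed;
    }

    public static void main(String[] args) {
        System.out.println(withinBudget(10, 2, "25%")); // true: 2 <= 2
        System.out.println(withinBudget(4, 2, "25%"));  // false: 2 > 1
    }
}
```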
All right, two examples again. Looking at the CPU resources, the requests and limits, and how they're utilized is the starting point if you're investigating things from the Kubernetes perspective. And maybe a little more on starting points, because this is something I found very interesting about these three perspectives: they're by no means a strict order. We need to measure the services, as I said earlier, but it's also very natural for people coming from a different background to start somewhere else. In the Kubernetes environment, maybe I'm more on the DevOps side and need to make sure a new namespace is running smoothly, so I'll start with the namespace. The important piece is that when you start in Kubernetes, you also understand the environment, how things are running on the host itself and on the cluster itself, that you have some means of getting there, and some means of understanding what the developers are actually putting on there. Having good starting points is important, and so is linking them, which I'll talk about.

How to measure this is actually pretty nice in Kubernetes, because it's basically all there. We've got kube-state-metrics, which covers everything around the workloads and their configuration; it's a service itself and just provides these metrics. On the control plane, we have the individual metrics endpoints. And, just for completeness, there's the metrics-server for autoscaling. So everything is already there, and with these provided as a standard, there are also a lot of pre-canned dashboards ready to give you this perspective, easy to see. We have that in Grafana, we have that in OpenShift. So it's pretty easy to get started with and to enhance with more metrics.
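Since kube-state-metrics is just another Prometheus scrape target, any client can pull its series through the standard Prometheus HTTP API. A minimal sketch; the Prometheus URL and the namespace value are assumptions for illustration.

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

// Pull a kube-state-metrics series through the Prometheus HTTP API.
public class KubeStateQuery {
    public static void main(String[] args) throws Exception {
        String prometheus = "http://prometheus.example:9090"; // hypothetical endpoint
        // kube-state-metrics exposes desired vs. available replicas per Deployment.
        String promql = "kube_deployment_status_replicas_available{namespace=\"shop\"}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(prometheus + "/api/v1/query?query="
                        + URLEncoder.encode(promql, StandardCharsets.UTF_8)))
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // JSON vector: one sample per Deployment
    }
}
```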
All right, infrastructure. Why do we now need to look at infrastructure? The example I gave should hint at it: the IO problem on the host is something that is needed for the troubleshooting. But even outside troubleshooting, it's something we, and developers in particular, shouldn't be afraid of keeping in mind: not only looking at my pod and my JVM, but also understanding how the JVM runs. What are the threads doing? How is it running on the host? Maybe that's one takeaway from the talk: encouraging developers to look into this and understand what's happening there, hopefully with a little help from this talk on what to look at.

Now, very important, and we talked about the services being the starting point: CPU utilization, as Adrian Cockcroft says, is virtually useless as a metric in itself. There are so many assumptions baked in that if you just look at CPU utilization and act on it, you will most likely be wrong. But putting it in context, and understanding what the service impact of a possible problem on the host is, is the point I'm trying to get at. And there's a great method here, similar to the RED method, by Brendan Gregg, an awesome performance engineer who does lots of talks and has written great books. The USE method says: for all physical server components, so CPUs, memory, and so on, look at basically three things. First, look at the errors, if there's an easy way to get at them, and what they tell you. Then look at the utilization, how busy the resource was serving work. And then the saturation, how much work is queued up for that resource.

On the host level we have gazillions of resources, on the host itself or connected to it. Something everyone should probably look at is CPU and memory, on the usage side and on the load side. Dashboards for this are all over the place, and it's important to just get familiar with them. Here's another example I found interesting, on the JVM side. The JVM being such an important part of our systems, we should look into it more deeply and at the different metrics there, be it the threads, be it the heap and the memory pools, and especially the garbage collection. That's something to understand and have ready when you're looking at problems.

So we've got the infrastructure metrics; how would I get them? In Kubernetes, the best way is again to work with exporters: there's the node exporter and the JMX exporter, and cAdvisor also exposes a couple of good metrics on the infrastructure side. But it's important to note that for performance reasons you sometimes need to look at the instrumentation itself, and more native instrumentation might be needed. For our sensor, and that's true for other vendors as well, roughly 50% of the instrumentation we gather is collected in a native way, just to be more performant.
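For the JVM side just mentioned (heap, threads, garbage collection), the platform MXBeans expose the same data the JMX exporter scrapes. A small self-contained sketch:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

// Read the JVM metrics discussed above (heap, threads, GC) straight from
// the platform MXBeans.
public class JvmMetrics {
    public static void main(String[] args) {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        System.out.printf("heap used/max: %d / %d MiB%n",
                heap.getUsed() >> 20, heap.getMax() >> 20);

        System.out.printf("live threads: %d%n",
                ManagementFactory.getThreadMXBean().getThreadCount());

        // One bean per collector (e.g. G1 Young/Old); counts and time are cumulative.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("gc %s: count=%d time=%dms%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```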
All right, so we've got the service, Kubernetes, and infrastructure; they're all needed, and each should be taken care of in its own right. Now, what do we do about context? How do we stitch these things together? That's something we at Instana have basically built our tool upon, but it's also something you can do yourself. When I was preparing the talk, I realized that there's a pretty good upcoming standard that hints at a lot of this: OpenTelemetry. There's a lot in OpenTelemetry, but what I'd like to highlight in this context is the resource semantic conventions. The resource semantic conventions in OpenTelemetry describe how a resource should be described in a consistent manner. There are a couple of tagging suggestions, and in OpenTelemetry they're not only suggestions; some are required and some are optional. I think that's a pretty decent starting point if we're thinking about how to correlate these things together. So if you've got a service, you pick a service name, and OpenTelemetry adds a service namespace that makes these unique together. Then you correlate that to a service instance ID, something that serves the service, and you've got a unique identifier for the thing that actually serves it. In Instana we also have a service type, which I think makes certain use cases easier to get at, but OpenTelemetry does not.

So we've got the service, and if we're using this tagging scheme, we can start correlating things together. We can see that this service belongs to this container, to this host, to this Kubernetes pod, for example. And the other way around: the information we've gathered from all these other entities carries common tags that we can use to correlate one to the other.

A different way of looking at this is the trace-based side of things. I mention this explicitly because, as I said earlier, the service metrics we gather are based on traces. We infer a lot of this information, we and others; it's not unique to Instana, but those who work that way infer a lot of this information from the traces. The way to look at it is that if you've got traces, you use the trace ID itself to correlate things. The first example here is from Grafana: they talk about putting the trace ID in logs and using a common service tag throughout the system, the service name here, handled slightly differently but consistent throughout the system, so that they can then search by service. Something interesting I found with Zipkin traces is that the tagged spans carry the pod IP, and for service naming they do a reverse lookup on that IP to find which pod the data belongs to. OpenTelemetry talks about making this more and more automatic, in an open standard; we already do this, as do others.

So this is an example, and we could also do a short demo if you like, of how we do this and how it's visualized in Instana. This is our example, but basically we separate these three perspectives, the application perspective, the Kubernetes perspective, and the infrastructure perspective, and from any entity you're looking at, you can link to the others. Conceptually, you can rebuild this with your own tools, or again, just give it a try with Instana.
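Here is a minimal sketch of both correlation mechanisms discussed above: tagging with the OpenTelemetry resource semantic conventions (service.name, service.namespace, service.instance.id) and pushing the trace ID into the log context via SLF4J's MDC. The attribute values are made-up examples, and a real setup would attach the resource through the OpenTelemetry SDK builder rather than hold it in a static field.

```java
import io.opentelemetry.api.common.AttributeKey;
import io.opentelemetry.api.common.Attributes;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.sdk.resources.Resource;
import org.slf4j.MDC;

// (1) Tag every signal with the resource semantic conventions so service,
//     pod and host line up across tools.
// (2) Push the trace ID into the log context so logs and traces join on
//     a common key.
public class Correlation {
    static final Resource SERVICE_RESOURCE = Resource.create(Attributes.of(
            AttributeKey.stringKey("service.name"), "checkout",            // example value
            AttributeKey.stringKey("service.namespace"), "shop",           // example value
            AttributeKey.stringKey("service.instance.id"), "checkout-7d4b9-xk2lp"));

    // Call at the start of request handling, inside an active span.
    static void tagLogsWithTrace() {
        Span current = Span.current();
        if (current.getSpanContext().isValid()) {
            MDC.put("trace_id", current.getSpanContext().getTraceId());
        }
    }
}
```

With the trace_id field in the log pattern, a log line found during an incident jumps straight to the trace, and the resource tags take you from that trace to the pod and the host, which is exactly the linking described above.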
So, key takeaways. We've got services, Kubernetes, and infrastructure. Please consider all of them. Please also treat each of them as useful on its own, and make the best use of them when looking at them independently, because there's always someone coming from that particular background, and if you overload them with information from different perspectives, they may be overwhelmed. Share these perspectives within the team to ensure a common understanding of what to measure and why you measure it: why is this particular saturation metric the most important one for your workload? And last but not least: Context is King. Link these together and make them visible to everyone, so everyone has the same understanding. All right, I guess this is it, at least for what I've prepared, and I think we've got time for questions.

What I would love you to do is go over to the Instana site, to the "install the operator" page, just so people have that link too; that would at least let them know where you live and breathe at Instana, and where all the docs are as well. You can also go to the Red Hat catalog and grab it from there.

Right. So a couple of words on the operator. We've been really stoked about the operator, and the Instana operator is basically available wherever you like: it's obviously available in OperatorHub, and you can also get it directly; we've included it in our environments and our instructions for installing the agent, or you can just get the source and everything from GitHub directly. Now, the agent operator does a lot of nice things for us; it helps us distribute what we do with our agent. Going back to the talk, to correlating all these different things together: that's something the operator and the agent do. And as you can imagine, if we take this a step further, we've not only got infrastructure, Kubernetes, and services, but all of this mixed in with the cloud offerings, different operating environments, different runtimes. There are lots and lots of things to do, and the operator helps us distribute that workload throughout the cluster: selecting different nodes, putting some intelligence into our operator, and making the agents very dynamic in what they do. The operator itself is on the agent side, and we've got something cooking on the backend side, but that's for someone else to share at a later point.

Okay, so a little context, Context is King here too: OperatorHub.io is all open source and runs anywhere, on any Kubernetes. And then, Michael, maybe explain a little what the catalog.redhat.com operators are all about.

Yeah, sure. And I'm so sorry that I'm late; I have no control over this, but I've been working from my cabin in the mountains for the last seven months, and it's a DSL phone line running through the woods, so when a moose gets crazy, things can go down. So I apologize for being so late. Matthias, how are you? It's nice to see you, and nice to see you wearing an Instana name badge these days. But I did link the Red Hat catalog, because our team works with companies like Instana and others to run their operators through the Red Hat certification process, which really allows customers to know that all the parts and details of it are, you know, like the Pillsbury blueberry-muffin-man seal of approval: the Red Hat components and the Instana components are all supportable, and they can be used in a production environment. That's where our customers can go to download something and make sure they're getting genuine "Intel Inside" parts.

So, yeah, and again, the operator. You can think of the operator in multiple dimensions.
So far, we don't have dedicated support for monitoring operators in our tool; an operator is just treated as a custom resource definition. But obviously, as operators get more and more adoption by developers, that would be one of the additional factors. We've got the services, we've got Kubernetes, we've got infrastructure; for a deeper, more intelligent look at the Kubernetes layer, the operator gives us the means to understand things even better, to link them together, and to put some semantics into the operations. Take Elasticsearch, for example: that's exactly a layer I can imagine adding. We are running our own operator with great success and are thinking hard about how to leverage that knowledge in an OpenShift or any Kubernetes cluster.

And you know, this doesn't just happen by chance. I was actually at a trade show, probably one of the early KubeCons, it might have been in Seattle or Portland, I forget where, and I ran into one of your founders, Pete Abrams, terrific guy. I was talking to him about what we were doing, and Instana was probably one of the first APM-type vendors that ever certified a container for the Red Hat portfolio and built an operator. That was because we were working with them very closely, and I used to travel down; Pete invited me to your sales kickoff in Miami, probably two or three years ago now. Me and my team flew down there, and we bought appetizers and drinks for the entire Instana sales organization. We've had a really, really good, close working relationship with your whole team, including your marketing people, for a number of years. So this doesn't just happen by accident; we do these kinds of things together to make the overall customer experience as good as it can possibly be in a cloud-native environment.

And we continue to do so. This is on the agent side, but we also have lots of backend components; again, that's for another conversation down the road. I need to get my colleague online for that, but we're going to continue investing there.

That's cool. Hey, I don't see any other questions coming in about your technology. If anyone else... go ahead.

There is one question: someone is asking about the backend operator status that you referred to. Can you give us any hints on when that's coming?

I can't give any hints, I apologize, Chris. Maybe I shouldn't have mentioned it.

Now, when you say the backend operator: right now there's an Instana agent, which is containerized and certified. Are you talking about Instana itself, the actual smarts on the backend, being turned into an operator as well?

Exactly. So, Instana itself: obviously you have the agent running, but we also have an on-prem solution, or a self-hosted solution as we like to call it. And Kubernetes has always, or at least for a very long time, been our primary way of shipping this. With our experience with operators on the agent side, we're also looking into what we can do on the backend side to make the on-prem install easier and faster. Okay.
And is that being driven by customers saying, "we have certain requirements where we need the full APM solution inside our infrastructure," from a security perspective or something like that?

Right. The traditional on-prem motivations apply here: making sure it's secure and in-house, but also performance reasons, ensuring it's near the actual workloads. So there are multiple ways of motivating this. We don't take a stance there; we just try to make it as easy as possible, and operators, in a heterogeneous environment like Kubernetes and OpenShift, give us the means of installing it.

Yep. Hey, we've got another question. I'm going to read it here and maybe you can translate for me, Matthias. Jeffrey says: "In New Relic, which we're using right now, we have Apdex, which measures satisfaction with response time against a set threshold to get insight into application health. Does Instana have any corresponding feature?"

Right. So Apdex is a really, really important aspect of monitoring your applications. What we lean towards more is the SLI and SLO way of looking at this. We've just introduced that: you define service level indicators on your customer journeys and alert on those. And with our application perspectives, you have much more fine-grained control over which aspect you're looking at and alerting on. So right now we don't have the very same equivalent of Apdex, but if you look at what Instana provides, I think in the end you may even prefer the way we translate these things.
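For reference, the Apdex score Jeffrey mentions follows a standard formula: a request is "satisfied" at or below a threshold T and "tolerating" between T and 4T, and the score is (satisfied + tolerating/2) / total. A tiny illustrative sketch of that arithmetic, not Instana's or New Relic's implementation:

```java
import java.util.List;

// Standard Apdex: score = (satisfied + tolerating/2) / total.
public final class Apdex {
    private Apdex() {}

    static double score(List<Double> latenciesSeconds, double thresholdT) {
        long satisfied = latenciesSeconds.stream()
                .filter(l -> l <= thresholdT).count();
        long tolerating = latenciesSeconds.stream()
                .filter(l -> l > thresholdT && l <= 4 * thresholdT).count();
        return (satisfied + tolerating / 2.0) / latenciesSeconds.size();
    }

    public static void main(String[] args) {
        // 3 satisfied, 1 tolerating, 1 frustrated -> (3 + 0.5) / 5 = 0.7
        System.out.println(score(List.of(0.1, 0.2, 0.4, 1.2, 3.0), 0.5));
    }
}
```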
Okay. Hopefully that addresses your question, Jeffrey; if it doesn't, I'm pretty sure we can get you answers to just about any question. Where would we send people with follow-up questions? My email address is waite@redhat.com, just W-A-I-T-E at redhat.com, and I can connect people with just about anyone at any level of the organization, and at Instana I am, from top to bottom, very close with everyone over there. Matthias, do you have... there you go, there's your email address as well.

There's my email address at instana.com. There's a gazillion ways to reach out; ping anyone at Instana and we'll get back to you. And Jeffrey, if you'd like to talk more about the Apdex standard, I'm happy to go into some detail with you, especially looking at use cases: why are you looking at this specific measure? Understanding why a particular measure is important and how it helps you is very important to us, and I think we usually have a good answer for that.

Hey, Matthias, I really wanted to ask you this at the very beginning, but as I said, I've been dealing with legacy internet issues. Next time, I'm coming up to the cabin earlier. You want to see what it looks like real quick? I mean, this is...

We're all going to the cabin soon.

Awesome. This is my front desk right here.

Oh, my God.

Yeah. But I was going to ask you: you were at Red Hat for several years, and then you moved to Instana. How lucky are you? I mean, being able to be part of that team at this time, when everyone needs APM to get visibility and insight into running their business in the hybrid cloud; are you just absolutely thrilled to be there?

Well, first of all, I was thrilled to be at Red Hat too. Red Hat was really a great time, and we built some really awesome tools; I was more on the developer side of things there, in the CodeReady area, and we built an awesome tool for analyzing dependencies. So a shout-out to all my ex-colleagues: that was a tremendous time. And yes, obviously Instana is great, because we're being challenged in a new way, going up against other players in the market, but we're using this new microservices movement to our advantage and building something very unique that is suited to this new environment. Just one example, which is very dear to my heart: it's really, really easy to get started. Our agent discovers everything and puts everything on the dashboard. Yes, you need to tweak and configure things, but everything is just there. And as we were talking about the different perspectives people look at things from: I think for myself, and I also hear this from customers, it's just great that you've got one platform to look at where you already see the majority of things, and then you dive into details and start tweaking, but you're not lost at the beginning. That's something I value a lot about Instana. And the whole distributed tracing area is just a fun topic, a fun technology.

Cool. Well, I don't see any more questions coming in, and I know Chris Short is going to let us know that we're just about out of time, so thank you so much for coming on. I reached out to Star, who's my marketing contact over there, and said, you've got to find me someone really, really good to be a part of this; it's one of our early OpenShift Commons briefings. So we're really glad you could help be a part of this today.

Gladly. Thank you very much for the invitation. Always a pleasure; I'm happy to come back. It's some great technologies mixing together, and yeah, happy to be here.

All right. And when you get that backend operator done and you're ready to talk, I'm going to let my colleague know and we're going to talk.

Yep. Take care.