Excellent. I'd like to thank everyone who is joining us today. Welcome to today's CNCF webinar, Observability via RED in the Age of Kubernetes. I'm Lachlan Evenson, Principal Program Manager at Microsoft and a Cloud Native Ambassador. I'll be moderating today's webinar, and we'd like to welcome our presenters: Dave McAllister, Senior Technical Evangelist at Splunk, and Jeff Lowe, Director of Product Marketing at Splunk. A few housekeeping items before we get started. During the webinar you are not able to talk as an attendee, but there is a Q&A box at the bottom of your screen; feel free to drop your questions in there and we'll get to as many of them as we can at the end. This is an official webinar of the CNCF and as such is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of the code of conduct; basically, please be respectful of all your fellow participants and presenters. And with that, I will hand it over to Dave and Jeff to kick off today's presentation. Thank you.

Good day. I'm Dave McAllister, and I'll be doing the talking; Jeff will join in to demonstrate some of the concepts I'm covering. I'm planning to cover a little bit about observability, talk briefly about how observability and Kubernetes work together, and then step into a discussion of the monitoring concept called RED. But let's start here. In this age of observability, since that's one of the phrases we're all using a lot, this is still perhaps one of my favorite quotes: "You see, but you do not observe." Arthur Conan Doyle wrote this in "A Scandal in Bohemia," and it applies quite well, because as systems get more complex, people tend to look only across the top, and as a result they miss all of the underlying behavior. We need to be able to look across multiple dimensions and in multiple directions, as signals cross these boundaries, to really understand what's going on.

So observability is really a signal-to-noise problem, or rather a signals-to-noise problem, and RED provides a useful filtering method. In general, we have operational data coming in, and there are two ways we can clarify the signals. We can reduce the number of inbound signals, which reduces the noise level adjacent to those signals; but that also tends to reduce fidelity, as well as our capacity for answering the question observability exists to answer: how do we find out about the unknown unknowns? The second, preferred approach is to take all the signals and filter them post-reception. That clarifies the signal by reducing the noise, but without the same loss, and we can also undo the filtering and get back to the original signal. That becomes really useful as we get into more complex environments, and Kubernetes definitely drives some of the most complex environments we look at.

So, TL;DR: observability is the quality of software, services, platforms, or products that allows us to understand how systems are behaving. The more technical definition, from control theory, is designing or defining the exposure of state variables in a manner that allows the inference of internal behavior. You see a lot of this in regular life.
You see this, for instance, with your car: if the temperature gauge moves into the hot zone, you can tell that something is going wrong. But by looking at the additional signals you can get from the car's onboard computer, you can determine much more about what the impact was, as well as what the underlying behavior looks like. So the ability really comes down to answering the questions of how the system is behaving and why it is behaving that way, as well as attempting to predict how it will behave in the future. And observability is focused on both the known problems and the unknown problems that arise in everyday life.

With that in mind, observability is definitely not one-dimensional. Think of it as internal state inferred from externally exposed points. Observability is a property of the system itself; it is not a tool. There are tools that assist with observability, but you can't just say "there's an observability tool" and be done. And because we need to see everything that's going on, it should include things like logs, monitoring, events, tracing, and anything else that applies. For instance, I know a customer who actually brings in the data from their Twitter feed, because they know that when things start going wrong, their customers will start asking questions on Twitter long before any of their conventional monitoring triggers. So they watch for certain keywords and raise alerts from that as well. Observability also includes elements of both metrics and time. It crosses the boundaries between applications and services, as well as disciplines: not only from the ops side into the dev side, but also into the business side. And finally, anything that slows you down is bad, because timely response becomes increasingly important across this model.

So observability isn't just monitoring. Observability is a system attribute; monitoring is really a verb. And despite the fact that we will talk about pillars here, there aren't just three pillars. There are lots of signals that come across, and you'll need signals you haven't even thought of today. Those signals are not static; they are constantly evolving and changing. And because of the nature of the microservices environments we tend to live in, we also see things that are ephemeral. All of these things feed into observability. Now, you'll hear about these a lot, because they are the primary pillars of observability. Metrics: how do I tell that something has gone wrong, do I have a problem? Metrics let you start building alerting. Traces: where is the problem, narrowed down as close as I can get? And what's causing the problem? Pinpointing it is really the job of the logs and log structures. You'll also hear that everything is an event. Everything is an event, but events only exist if they're recorded and measured. This is the standard "if a tree falls in the forest and no one hears it, did it happen?" Can you prove that it happened? Can you backtrack to when it happened, and maybe even to what caused it?
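To make the three pillars concrete, here is a minimal sketch of a service emitting all three signals using the OpenTelemetry Python API plus standard logging. The service name, endpoint label, and attribute keys are illustrative choices, and without an SDK and exporter configured these API calls are no-ops:

```python
import logging
from opentelemetry import trace, metrics

tracer = trace.get_tracer("checkout-service")      # traces: the user's journey
meter = metrics.get_meter("checkout-service")      # metrics: compact points in time
requests = meter.create_counter("requests", description="Requests handled")
log = logging.getLogger("checkout-service")        # logs: verbose record of events

def handle_checkout(order_id: str) -> None:
    # One request produces all three signals at once.
    with tracer.start_as_current_span("handle_checkout") as span:
        span.set_attribute("order.id", order_id)
        requests.add(1, {"endpoint": "/checkout"})
        log.info("checkout completed for order %s", order_id)
```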
Metrics are very compact and efficient, but they may not be sufficient to determine everything we need to look at for observability. Logs and events are full fidelity, but they can be relatively bulky to capture, and their usefulness depends on whoever decided what to capture. In my past I've seen a log entry that literally just said, "oh, I'm here," which is not actually useful for anything I've ever found. Traces are cool, but they require instrumentation work, whether that's auto-instrumentation or capturing pre-instrumented functionality; they do take work to establish, and they can also be bulky at full capture. So: metrics are your points in time, traces are the user's journey through your application, and logs are the verbose representation of everything, all the events that have happened. Logs are linear; metrics are what feed your alerting structures.

While we've been going through all this, the worlds of operations and application development have also changed, and you're all here because, very honestly, things are shifting to the right-hand side of this picture. We're finding people refactoring, people re-architecting for the cloud, moving to loosely coupled microservices and serverless functions, whether on public or private clouds, container-based or otherwise. This is heavily powered by orchestration, by the need to manage and control these environments, and microservices applications themselves can be very complex. Think about it: building continuously available distributed systems is a challenge by itself, and now you're doing it with a distributed development model in a complex environment. This diagram came from Martin Fowler's wonderful piece on microservices; it's a look at the microservices making up, essentially, the amazon.co.uk site, with all of the little services drawn in. As you can see, there are quite a number of them. Now take that complexity and add elastic capabilities, and suddenly you have a much larger set of services to keep track of across larger and larger environments.

So why Kubernetes? Kubernetes orchestrates all of this on your behalf, and you're probably very familiar with that. But Kubernetes also adds its own layers of infrastructure, as well as control capabilities that you need to bring into your space. You're now looking not only at your application space, but also at what's going on inside your orchestration space, all the way down into the infrastructure space. Each of these pieces adds complexity, which means Kubernetes increases both the ephemeral nature and the churn of the environment. And Kubernetes is heavily in use: one 2018 report found 68% of respondents running Kubernetes in production. But it does add multiple levels of abstraction to monitor: containers, pods, clusters, nodes, namespaces.
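As a rough illustration of those layers, here is a short sketch, assuming the official `kubernetes` Python client and credentials in your kubeconfig, that walks from nodes down to the containers inside each pod, the same hierarchy a monitoring agent has to track:

```python
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when running in a pod
v1 = client.CoreV1Api()

# One abstraction level at a time: nodes, then namespaces and pods, then containers.
for node in v1.list_node().items:
    print("node:", node.metadata.name)

for pod in v1.list_pod_for_all_namespaces().items:
    for container in pod.spec.containers:
        print(pod.metadata.namespace, pod.metadata.name, container.name)
```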
Things can happen in seconds: containers can spin up, containers can spin down. With dynamic workload placement, you don't know where a workload is going to land; you can give directions for the placement you'd like, but you don't actually know where it will end up. Because of that, you end up with challenges monitoring the end-to-end performance of distributed services, because locality matters when you're looking at these pieces. We've also seen additional functionality show up, such as service meshes, to help us control some of that end-to-end behavior of distributed systems, and those add yet another level of the complexity we're talking about.

I want to close out this part by pointing out that observability is a loop problem. I was a real-time programmer; I've worked in process control and SCADA and all of those environments. The dual of observability is controllability: just being able to see something, without being able to respond to it, is only part of the problem. Telemetry is our input, the logs, traces, metrics, and whatever else we bring in, but observability needs to be part of a loop, whether an open loop or a closed loop. My process needs to feed telemetry into observability, and I need controllability, either manual or automated, feeding back to control the activity of the process. That gives us the full control capability of which observability is a part, and we need to look more and more at how we actually respond to the information and to the concerns our observability signals raise.

So, why RED? Well, complexity matters. We've already seen that we have lots of moving items and lots of interrelations, and unfortunately we also have a lot of "not there now" when we go looking: if the pod's not there, where's the data? We need simplicity and abstraction to cut through the clutter and see what's going on, while at the same time retaining that complexity so we can find the gotchas and the aha moments, investigate what's actually happening in the system, and resolve problems. So what is RED? RED is basically a subset of the Google golden signals, which are more SRE-related, and it was first described at Weaveworks; my hat tip to Tom Wilkie. It boils down to rate, errors, and duration, and it was designed for request-driven systems, specifically for microservices. I'll point out that RED is useful in any request-driven service where you need to know what's happening across the entire environment, whether that's an e-commerce system or an IoT response system. You can also look at the accumulation, the aggregation, of this information and compare its behavior to the norm. The little example here shows the services being called, the requests per second each is handling, the error rate showing up for that service, and what its duration was at the 50th percentile as well as the 90th.
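A minimal sketch of what RED instrumentation can look like in code, here using the `prometheus_client` library; the metric names, label, and port are illustrative choices, not a prescribed scheme:

```python
import time
from prometheus_client import Counter, Histogram, start_http_server

# RED: rate and errors come from counters, duration from a histogram.
REQUESTS = Counter("requests_total", "Requests received", ["endpoint"])
ERRORS = Counter("request_errors_total", "Requests that failed", ["endpoint"])
DURATION = Histogram("request_duration_seconds", "Request latency", ["endpoint"])

def handle(endpoint: str, work) -> None:
    REQUESTS.labels(endpoint=endpoint).inc()
    start = time.perf_counter()
    try:
        work()                       # the actual request handler
    except Exception:
        ERRORS.labels(endpoint=endpoint).inc()
        raise
    finally:
        DURATION.labels(endpoint=endpoint).observe(time.perf_counter() - start)

start_http_server(8000)              # exposes /metrics for a scraper to collect
```

A dashboard like the one described above can then derive requests per second, error percentage, and p50/p90 latency per endpoint from just those three instruments.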
So RED gives you a whole bunch of information in a very simple way, without any confusion. But RED by itself is not enough to break out and figure out what's really going on. RED brings with it a need for multidimensional capability, or, as I like to call it, monitoring at the Chuck Norris level: when you see something, take action. Metrics lead us to rate: we use metrics to determine the overall rate of activity, and each interaction has its own metric that we can aggregate to see what the system's behavior looks like. Logs usually relate to errors; that's how we see and report them. And traces are usually focused on duration. Rate tends to give us recognition that something is wrong, while logs and traces tend to give us recognition of the root causes. Each of these pieces works with the others to help us respond quickly to the complexity that orchestration, Kubernetes, containers, and serverless are all bringing to our new microservices environments.

Let's dig in a little more. Rate is the number and size of requests on a network in the system. These can be HTTP requests; they can be SOAP or REST; they can be middleware traffic such as Kafka; they can be API calls or RPC calls. They can even be the overhead of the control structures themselves: service meshes, for instance, report their own traffic as well. Any environment that fails under peak traffic is a target for rate monitoring. So it's not just knowing what's happening in terms of requests to microservices; it's keeping an eye on what's happening in my overall environment. You can think of this as watching my bandwidth and my bandwidth utilization, and to quote John Mashey, a very famous computer scientist, bandwidth costs money. Are you hitting peak loads? Are you exceeding your bandwidth? This is the communication pipeline for your apps, and your enemy is peak load. Rate can tell you quickly where you are relative to peak load, as well as giving you a chance to see what customer activity and the customer experience look like. And rate will report things like this: in this case, I've got a rate that, over a 10-second window, is running about 10 requests per second. You can see what my peak activity looks like, you can see the request rate by endpoint, and you can see what the top endpoints are. This is a very simple example of what a RED structure can tell you just from the rate data alone.
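As a toy version of that rolling view, here is a sketch of a per-endpoint rate calculation over a sliding 10-second window; a real system would do this in the metrics backend rather than inside the service, and the window size is just the one used in the example above:

```python
import time
from collections import defaultdict, deque

class RateWindow:
    """Requests per second, per endpoint, over a sliding window."""

    def __init__(self, window_seconds: float = 10.0):
        self.window = window_seconds
        self.events = defaultdict(deque)        # endpoint -> request timestamps

    def record(self, endpoint: str) -> None:
        self.events[endpoint].append(time.monotonic())

    def rate(self, endpoint: str) -> float:
        cutoff = time.monotonic() - self.window
        q = self.events[endpoint]
        while q and q[0] < cutoff:              # drop timestamps outside the window
            q.popleft()
        return len(q) / self.window

    def top_endpoints(self, n: int = 5):
        return sorted(self.events, key=self.rate, reverse=True)[:n]
```

Counting error events in a second window of the same shape gives the rolling error percentage discussed next.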
So, errors. Everybody has had errors: problems that result in incorrect, incomplete, or unexpected results, and they can be widespread. They can be production load bugs, they can be peak load bugs, they can be communication failures; there are lots of places errors can come from. But errors require rapid response, and they tend to require point-specific responses. To do that, you need to dive quickly into the underlying problem; you need high-fidelity data, and you need it as soon as possible. Honestly, if your business depends on the system that failed, "as soon as possible" might not even be fast enough. There are plenty of examples of companies rolling out brand-new functionality, like Amazon's Prime Day problems in 2018, where an issue essentially kept people from getting enough capacity to respond to the changes. The error showed up in the rate data, but the error itself required rapid response to keep the company from continuing to lose money and frustrate customers.

So, an errors example. If you're seeing an error rate in the 45% range over a 10-second period, you probably have a problem. And because we need to look at this across multiple dimensions, you can break it down over time, and you can also break it down by endpoint. In this case I can see that a specific API endpoint is coming out of my catalog system: at a 10-second rollup, I'm seeing a 54% error rate on the API catalog endpoint. Now I have the basis to go back and say something is wrong in the catalog service, step into the logs for that timeframe and that specific service, and start figuring out what went wrong and how to resolve it.

Duration, then, is all about time. Duration is both client-side and server-side; we tend to find people measuring the server side, but the client side may actually be more important, just harder to manage. This is usually the domain of distributed request tracing: OpenTracing, OpenCensus, and OpenTelemetry. The job of duration is really to bring events into causal order. When did the event happen? How long did it take? Why did it take so long? And finally, which service or microservice was responsible for the delay? As we move into microservices-based environments, particularly with an orchestration approach that allows functionality to live anywhere, the time between services is often as important as the time within services. Generally speaking, we measure the time from the request, the span, if you will, to the end of the work, which lets us measure both the communication pathway and the service pathway itself. We may need to drill into that further, and sometimes it's useful to bring in a service mesh, such as Envoy and Istio, to help you understand and control the communication structure itself. In microservices, particularly with ephemeral orchestration, understanding your pathways is incredibly important. So here, over that same 10 seconds, I can see that my request latency is running about a second, which may or may not be bad depending on your application. You can see the distribution of that latency by breaking it down into percentiles, and you can also look at it by endpoint to see which ones contribute the most.
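To show where those percentile numbers come from, here is a small nearest-rank percentile sketch over a batch of observed latencies; the endpoint names and samples are made up:

```python
import math

def percentile(samples: list[float], p: float) -> float:
    """Nearest-rank percentile: good enough for a monitoring sketch."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))   # 1-based nearest rank
    return ordered[rank - 1]

latencies = {
    "/api/catalog": [0.8, 0.9, 1.0, 1.1, 3.2],
    "/api/payment": [0.2, 0.3, 0.3, 0.4, 0.9],
}
for endpoint, samples in latencies.items():
    print(endpoint, "p50:", percentile(samples, 50), "p90:", percentile(samples, 90))
```

Note how a single slow outlier barely moves the p50 but dominates the p90, which is why duration is usually reported at multiple percentiles rather than as an average.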
So why RED? Well, RED is actually easy to remember: rate, errors, duration. And it's a great starting point; if you haven't already started moving toward observability, RED is a great place to begin. Interestingly enough, and we've seen this in multiple cases, it also reduces decision fatigue, where everybody gets in a room and says "no, I need to watch this," "no, I need to watch that." RED gives you a common starting point, a standardization and consistency you can stretch across teams, while retaining the ability for each team to do the discovery it needs. Also, because of that standardization and consistency, and because these pieces are well understood, RED helps with automation: driving responses, whether that's notifying somebody that something is wrong, deciding where it's wrong, or, in the age of AI and ML, responding automatically. And finally, one of the things we see RED used for a lot is as a proxy for user happiness. There was a Google mobile survey around 2018 that pointed out that a user who waited over three seconds for their shopping cart to appear was 86% likely to walk away; and when the wait went up to five seconds, shopping cart abandonment was effectively total, with people literally seeing the cart appear and still leaving. That's an indication that something is going wrong. If your users aren't happy, and users are very often driven by response times, RED gives you a very quick indicator of what's happening in that very complex environment and how it affects them. So RED is a wonderful category for user happiness.

Now, here's one of the interesting gotchas: these three components, rate, errors, and duration, can interact in interesting ways. A rate problem can be caused by an error: something not responding, or a slow consumer responding more slowly than it should. Errors can show up as a bandwidth or response-time mismatch: you asked for something, but you asked asynchronously, and the system will get to it when it can. And a duration problem can literally be that nothing ever came back to say the work finished, or it can trace back to an infrastructure limit, a network bandwidth issue, or an error. So rate, errors, and duration can all affect each other; that's why the answer is so often in the last place you look. And remember that each of these pieces provides a different insight into resolving problems in your microservices environment: rate says something is wrong, duration says something is wrong here, and errors say this is what's wrong.

There are some challenges, though. Traditional solutions tend to have poor visualizations, and they tend to be slow at scale. Traditional approaches to this tend to be really hard to use, and they don't allow easy disaggregation: if all I can see is the aggregate metrics, I can't necessarily drill in to find out what's going on. So let's take a moment and actually see how RED monitoring can help you look at problems in microservices under Kubernetes. With that, let me turn this over to Jeff Lowe; he'll step us through what's going on.

Yeah, thanks, Dave. Hi everyone, my name is Jeff Lowe and I'm Director of Product Marketing here at Splunk, formerly SignalFx.
As Dave mentioned, I'm going to walk you through a quick demonstration of how that comes to life: how you can leverage RED to understand what's happening in a complex microservices environment, as well as drill down into what's happening in your Kubernetes clusters. We're showing a services dashboard here, and as Dave mentioned, there are a number of error metrics, rate metrics, and duration metrics you can use to see what's going on very quickly. What I'm going to do is back out of this a little bit. This is a simple topology of an e-commerce application, and as I mentioned before, it's not always obvious what's causing a problem. In this particular example, looking at our microservices APM UI, you can see color coding showing that there are alerts on these individual microservices; they're all color coded by error rate, per the legend down in the bottom left. The ring around this API service represents its alerts, so if I click on API, I immediately see there's an alert showing a sudden change in error rate. We're using the RED metrics, and that sudden change, to say, okay, there's a potential problem here. But again, in a complex environment it's not always obvious what's causing the issue. If we click into this alert, we pull up an alert modal, and we see that sudden change happening right at this timeframe on the time series graph. But what we really want to do is drill down and understand what's really going on: is it this actual service that's having the problem, or is it maybe a downstream service dependency causing it? With some of the built-in technology we're using to bring this to life, we have the ability to correlate services all the way down to what's actually causing the problem. If you recall, the alert was on the API service, but we can immediately see, with a very high level of confidence, that it's actually not the API service having the problem; it traces down through the checkout service to the payment service. So it's actually the payment service that's having a problem. There are a number of things we can show below as well as to the right, but what I really want to dig into for the sake of this webinar is how we do service-to-infrastructure correlation: starting from these RED metrics and getting down into Kubernetes. What you can see over here are the correlated infrastructure components, the infrastructure on which the services are running. There's an entry here for the Kubernetes pod running that payment service instance number six, so I'm going to click on it to drill down. What it shows me is the infrastructure, the Kubernetes containers, this particular service is running on. The service runs in this pod, which runs on a node in a cluster, but this particular pod shows nothing immediately obvious in terms of errors. If I look around this Kubernetes navigator, I see some pod properties, but the CPU doesn't look too bad, there's nothing obviously changing in memory utilization, and nothing really changing in pod throughput.
I look down here at the containers in this pod, and nothing immediately jumps out as something that would cause that alert we saw earlier on the API service. So I'm going to back up, go up the food chain in the hierarchy, and click on node detail. By clicking on node detail, I can immediately see that there's a potential problem here. What I want to call your attention to is the second graph in the middle, memory percent utilization by pod: you can see there's a pod, in pink, that's using quite a bit of memory. Or rather, it's the containers actually running in that pod. There are a number of containers, and this particular container, named dc10-robot, happens to be correlated with a workload with the not-at-all-suspicious name bad-robot, and it's chewing up roughly 90% of this node's capacity. So something's fishy with this particular container; let's go ahead and investigate it a little further and click on it. In this info window I can immediately see all the properties that were used to deploy this container, and by scrolling down, I immediately see a potential issue around the memory limit, which is probably why the memory utilization is so high: whoever deployed this container did not set a memory limit. That's why it's chewing up all the memory, and therefore impacting neighboring services, specifically the payment service, which actually had nothing wrong with it. It just happened to be deployed on a piece of infrastructure, on a node, that had a noisy neighbor. The idea here, again, is using RED to very quickly understand what's going on in a complex environment, and then to drill down and understand how the infrastructure underlying those services can be monitored and troubleshot, to understand what could be causing alerts on your services and therefore impacting your users with delays, latencies, or potential service outages. So that was a very quick demonstration of how RED comes to life in a complex environment.
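For reference, the fix for that root cause is to give the container a memory limit. Here is a hedged sketch using the official `kubernetes` Python client; the names, image, and sizes are hypothetical, and the same fields map directly to `resources:` in a YAML manifest:

```python
from kubernetes import client

# With a limit set, the kubelet caps (and if necessary OOM-kills) this container
# instead of letting it starve its neighbors on the node.
resources = client.V1ResourceRequirements(
    requests={"cpu": "100m", "memory": "128Mi"},   # what the scheduler reserves
    limits={"memory": "256Mi"},                    # the hard ceiling it may use
)

container = client.V1Container(
    name="robot",
    image="example.registry/robot:latest",
    resources=resources,
)

pod_spec = client.V1PodSpec(containers=[container])
```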
Thank you, Jeff. Appreciate that. Let me get back to the right screen here. So, as you saw, we can have problems identified via the metrics, but the ability to use this multidimensional approach, looking at the multiple sets of signals that come into play, lets us actually go down and determine what's happening inside so we can respond as fast as possible.

A bit more on observability, particularly as it relates to services: there are two views that matter. The external view, your customer's view, is singular: they look at their request, its latency, and whether it succeeded. The operator's view is over the whole workload: all of the requests, their latencies, their rates, and their concurrency, as well as the system resources and components. RED gives you the ability to look at both of these views and figure out what's happening across them.

Now, some basic philosophy; I did once run a nonprofit, so bear with me on the philosophy. Let's start by pointing out that instrumentation by itself is not an answer. I don't care how much you collect; if you can't find the answer, the instrumentation doesn't help you. Instrumentation should help you find the answer, and your visualization should help you pinpoint it. Metrics are powerful, but they're not sufficient on their own: cardinality matters, and being able to drill in, to look at all of the signals and all of the necessary data, becomes incredibly important. Finally, you're observing the work, not the service per se, though you can look at the services anyway; in particular, you want to look at how the service responds to the workload. We often hear "I test for production," but the problem is that some errors that show up in production can never actually be tested for. They're among your unknown unknowns: you do not necessarily know how the system will respond. So look at how the service responds to the workload both over time and in the moment. And finally, your goal is not really observability; it's reducing mean time to detection, mean time to response, and mean time to resolution. My shorthand: I want to reduce my mean time to WTF, what the heck is going on?

Summing up: observability is more than monitoring. Observability uses monitoring capabilities, but it also uses more, letting you deep-dive through the levels and figure out the underlying causes. RED wins for microservices-based applications: it simplifies and standardizes the way we look at the data, so all teams and groups can be on the same page. It also tends to simplify the observability of Kubernetes and its complexity, while retaining insight into how the underlying structure is working, as well as how your application is doing. Keep in mind that the RED signals can interact in interesting ways: what you think is the problem can actually be caused by one of the surrounding components, so look at all of the components to determine what you need to do and how you need to respond. And finally, find the right tool to give you clarity and insight into what's going on in your microservices environments, particularly in Kubernetes and its orchestration environment. And with that, I'd like to thank you for your time today, and if there are any questions, we'll open it up.

Excellent. Thanks, Dave and Jeff, for the great presentation. So we have some time for questions. If you have a question you'd like to ask, please drop it in the Q&A tab at the bottom of your screen, and we'll get to as many as we can right now. We have a question here that I'll ask: please explain more what "cardinality matters" means when it comes to the sufficiency of metrics.

Okay. When we think about cardinality, and in particular high cardinality, cardinality is a property of data that relates to how compressible the data is. So it becomes a case of looking at the complexity of the data. In microservices environments, data is complex: you're looking at multiple services, multiple communication pathways, things that are starting and stopping, and you can even be looking at dimensions like geolocation, or which browsers the traffic is coming from on the consumer side. Each of those pieces can have an impact on any of these functions, whether it's the rate or the duration. So cardinality matters, because it shows you the depth of what's happening inside the system.
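A back-of-the-envelope illustration of why that depth explodes: the number of potential time series per metric is the product of each label's distinct values. The counts below are invented, but the multiplication is the point:

```python
from math import prod

# Hypothetical distinct values per label dimension.
label_values = {
    "service": 50,
    "endpoint": 20,
    "container_id": 400,   # ephemeral containers churn through unique IDs
    "region": 6,
    "status_code": 8,
}

series = prod(label_values.values())
print(f"up to {series:,} unique time series per metric")  # up to 19,200,000
```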
Yeah, that's a great question on cardinality; it's something that's very critical to microservices environments in particular. I just wanted to add one quick thing to what Dave said about compressibility: one way to think about cardinality is as the uniqueness of data. You have lots of different microservices, and even within a microservice, lots of instances running on highly ephemeral infrastructure like containers, with containers coming up and down, each with its own unique container ID. So cardinality becomes one of those things that quickly expands the set of unique data you need to analyze. In these environments, leveraging RED metrics and distributed tracing, it's something you absolutely have to account for when you're planning and building your monitoring and observability solutions.

Excellent, thank you. Another question we have here: is Falco a good tool for RED observability?

So Falco is a newer offering from Sysdig. I've seen it in an early demonstration, and I think it's a fantastic approach, particularly for the security side of things. What they've done really well is combine basic monitoring and fundamental security into that open source offering. I can't really comment on other vendors' products, but I like the approach of combining, from a very macro perspective, DevOps and DevSecOps into a single solution, because at the end of the day, once you've got your monitoring, your complexity, and all your dependencies figured out, one of the natural next steps is to make sure your security is buttoned up as well. So I think it's a very smart approach, but that's really all I can say about it.

Okay, we have no further questions, so I'll solicit one more time: if there are any questions, please click the Q&A box at the bottom and ask them there, and we'll give it a moment. Okay. Thank you, Dave and Jeff, for a great presentation. That's all the time we have for today, and thank you all very much for joining us. The webinar recording and slides will be online later today, and we look forward to seeing you at a future CNCF webinar. Have a wonderful day. Thank you.