Hi, this is Jeff Holmes. I'm going to give you a quick intro and demo of our cluster agent, the new Kubernetes monitoring solution we introduced earlier in the year. It runs as a single replica in your cluster, meaning a single pod with a single container, so it's very lightweight, and it doesn't require any privileged access. The way it works is that it pulls metrics from the Metrics Server, which is often included out of the box in Kubernetes distributions and is very simple to deploy if not. The Metrics Server provides the cluster agent not only information about the status, capacity, and inventory of objects running in the cluster, but also the health of the applications running as pods in particular namespaces. The cluster agent takes all of that information and reports it to a controller, where all of the metrics are baselined, so you can use that information to create proactive alerts. It also feeds an out-of-the-box dashboard that gives you visuals into the overall status and health of your cluster, which I'll demo in a second. You deploy the cluster agent using the operator framework. You start by downloading the cluster agent from our download site, and then there are a few very simple steps to deploy it, which are the same across any Kubernetes distribution. First, you deploy the operator, which creates custom resource types for the cluster agent as well as InfraViz. For the cluster agent type, you take the cluster agent YAML, do some very simple configuration, point it to the image you want to use for the cluster agent application, and go ahead and deploy it. Through the InfraViz custom resource type, you can also deploy our server visibility or network agents, which give you deeper insight into the nodes, the resources on those nodes, and network traffic.
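As a rough sketch of that deployment flow, once the operator is installed you apply a custom resource that points at the agent image and the controller. The `apiVersion`, `kind`, and field names below are illustrative assumptions, not the exact schema shipped with the operator, which varies by version:

```yaml
# Hypothetical cluster agent custom resource; the exact apiVersion,
# kind, and field names depend on the operator version you download.
apiVersion: cluster.example.com/v1alpha1
kind: Clusteragent
metadata:
  name: k8s-cluster-agent
  namespace: monitoring
spec:
  appName: my-cluster                            # how this cluster appears in the controller UI
  controllerUrl: https://controller.example.com  # controller the metrics are reported to
  image: docker.io/example/cluster-agent:latest  # cluster agent image to run (single replica)
  nsToMonitor:                                   # namespaces whose pods are monitored
    - default
    - ecommerce
```

Deploying is then a single `kubectl apply -f` of this file; the operator watches for the custom resource and creates the cluster agent pod from it.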
On the left, we're highlighting the different options available for instrumenting your applications in a Kubernetes cluster. The simplest, on the bottom left, is to essentially bake our agent into your application image, so that when you deploy that image it has all the dependencies, and the agent runs in the application process and reports to our controller. The top left is based on the init container approach. This is supported in Kubernetes only, but it is very flexible, because you can build your app image without any agent dependencies. Then, at deploy time, you specify an init container image in your deployment spec. You can build that image yourself or leverage one of the ones we publish on Docker Hub, which contains only the agent binaries and dependencies. At deploy time, those dependencies get copied into the app container: the init container runs for a short while to perform the copy and then terminates. The end result is an instrumented app container running in Kubernetes, with a separation of concerns where you don't have to manage the dependencies directly in the image as you would in the first approach. And then we're excited to announce that, as of a few weeks ago, we have a third option that provides the most flexibility and the simplest day-one and day-two operations experience. We call it auto-instrumentation, and it's initially supported for Java applications running in Kubernetes. The cluster agent leverages the init container approach and automatically applies it to the applications running in your cluster. The benefit is that you can manage the agent version in a single place.
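The init container approach described above can be sketched as a deployment fragment like the following. This is the standard Kubernetes pattern of an init container copying files into a shared `emptyDir` volume; the image names, paths, and the `JAVA_TOOL_OPTIONS` attach mechanism are assumptions for illustration, not the exact published images:

```yaml
# Illustrative deployment using the init container approach.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels: {app: my-app}
  template:
    metadata:
      labels: {app: my-app}
    spec:
      initContainers:
        - name: agent-install
          image: example/java-agent:latest   # hypothetical image holding only agent binaries
          command: ["cp", "-r", "/opt/agent/.", "/agent-volume/"]  # copy, then terminate
          volumeMounts:
            - name: agent-volume
              mountPath: /agent-volume
      containers:
        - name: my-app
          image: example/my-app:1.0          # app image built with no agent dependencies
          env:
            - name: JAVA_TOOL_OPTIONS        # JVM picks this up and loads the copied agent
              value: "-javaagent:/agent/javaagent.jar"
          volumeMounts:
            - name: agent-volume
              mountPath: /agent
      volumes:
        - name: agent-volume
          emptyDir: {}                       # shared scratch volume with the pod's lifetime
```

The init container exits after the copy, so at steady state only the instrumented app container is running.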
You can also manage instrumentation in a single place, rather than having to replicate that configuration across the deployment specs for your various applications. This makes it very simple when it comes time to update your agent image: you can do that, again, in a single place, so day-two operations are much simpler. The end result is that all of the applications you whitelist are automatically instrumented with very little effort. So that's a great new feature, initially for Java; we'll be working on .NET Core next, and Node.js after that. With that, I'm going to go ahead and demo our cluster agent, starting with our clusters dashboard. Here we see a list of clusters, with a cluster agent running in each one. I can dive into a particular cluster and get our main dashboard. The top summarizes and categorizes the different kinds of events that are occurring, whether errors or eviction events, which might indicate a lack of resources to deploy additional applications, forcing the scheduler to evict applications. In the center, we have a categorization of the pods by the phase they're in, and in the top right, the overall physical capacity of our cluster. In the middle there's a bit more detail about particular types of errors that are occurring; for example, we see that there are image-related issues, which we'll look at. The bottom sections contain a lot of useful information that helps with capacity decisions. For example, under utilization, I can see the overall requests and limits that have been defined, which are essentially the way Kubernetes assigns quotas to my applications, and I can see that my CPU use percentage is very low compared to the requests and limits that have been defined.
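For reference, the requests and limits shown in this view are declared per container in the pod spec; this is standard Kubernetes syntax, with illustrative values:

```yaml
# Example of requests and limits on a container; values are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: sample-app
spec:
  containers:
    - name: app
      image: example/app:1.0
      resources:
        requests:           # minimum the scheduler reserves on a node
          cpu: 500m
          memory: 256Mi
        limits:             # hard cap; exceeding the memory limit kills the container
          cpu: "1"
          memory: 512Mi
```

If actual usage runs far below these numbers, the node capacity is reserved but idle, which is exactly what the utilization view surfaces.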
This might be an opportunity, for example, for me to lower some of those limits and requests and make room for other applications to be deployed. The bottom right shows, for the quotas that have been defined, to what degree their limits and requests have been utilized. We also have an event view. From the events, I can see two categories of events shown here. The first is related to health rules: I mentioned that we baseline all of the information we get from the cluster agent, and I can create health rules based on that, which would show here. But I can also filter those out and show just the cluster events. In that case, I'm seeing the same events I would see with kubectl; if I ran kubectl get events from the CLI, I would see the same information. Here I can see there's a particular issue, an ImagePullBackOff, for a particular app. I can open this up and see the particular deployment, or pod, that's having the issue, and I can confirm it's an ImagePullBackOff if I drill down into it. If I go to our pods dashboard, I can filter on that particular pod and get a bit more detail about what's going on. This shows me the status of the events associated with that pod; I can see there's a particular message associated with it, and, although this one isn't currently showing it, it would also show me a more specific event about which image was the problem. It also shows me the hostname of the node it's running on. Now that we're in the pods dashboard, I can highlight a few cases where you might be in a troubleshooting workflow. Say you notice that a particular application, in this case called offers profile, is restarting often. You can see here we have a v1 and a v2, and the v2 is restarting many times compared to the v1, so there's some kind of problem here. Let's open up this pod and take a look.
We see that there are recent events associated with it, and we also see this APM correlation option. What this means is that we have an APM agent running inside a container that's part of this pod, and because of that, our cluster agent can establish the correlation and show additional views made available by the APM agent. If I click "link to node dashboard", I'm now looking at our APM view, which shows me exactly what is going on within that container, essentially this profile service here. Based on our tracing capability, I can see the interactions it's involved in: there's a web tier making a call into this container, and if it were making any calls out to databases or downstream services, I would see those here as well. From a troubleshooting perspective, I can go take a look at the memory usage, and if I open the window up a little, I can see there's an issue with memory being consumed in the heap, repeatedly climbing all the way to 100% and causing the container to crash. So this, in this case, is the root cause of this particular pod, or container, restarting. If we go back to our pods dashboard, I can highlight another troubleshooting scenario. Starting in pods, let's look for another particular app. In this case, we have a v1 versus a v2 and v3, and I can see that v2 and v3 are exhibiting heightened CPU usage. If I choose our v3 version, I can see that I have the same APM correlation, so our APM agent is running inside this particular pod and container. I can jump over and see the interactions; in this case, Web Tier 6 is the name of the container from the APM perspective, and we can see that it is interacting with two other services.
I can also take advantage of our snapshot capability, which is our APM agent's ability to capture more detailed diagnostics, from a call graph perspective, when it notices something going on that's impacting performance. If I look for particular snapshots and open one up, I can see an example where a lot of time is being spent in this particular method, called hogCPU. That is likely the root cause of the additional CPU usage we're seeing on the cluster side, and we can see there are a lot of threads sleeping as a result. The last thing I wanted to show is our inventory dashboard. If I jump back to our particular cluster and open up the inventory view, we can see on the right that we're able to track the different Kubernetes objects running in our cluster and what state they're in, along with the namespaces we're monitoring, or have whitelisted. On the left, we can see to what degree we've implemented best practices like readiness and liveness probes, and where they're not implemented; whether, for example, a pod is running without a service, which could be an issue; and other things we would want to be aware of and potentially work with our developers to address. So that's the end of my demo. I'm going to turn it back to Vikram now to finish out our session. Thanks.