So I'm here to talk about containerized agent deployment, which falls into the ease-of-use category among the enhancements to the product. It's a quick and easy way to install the TrueSight capacity agent in a containerized environment. Obviously the key there is that the environment has to be containerized: you have to have a container host in order to install the agent in a container. The goal is to simplify and speed up the deployment of the TrueSight capacity agent. Installing it as a container takes just a matter of minutes, and it's delivered as a container image. One of the advantages is that it reduces the risk of failed agent deployments. I have a customer right now who is rolling out thousands of agents, and especially on Windows they've found that their servers are not all configured uniformly; in particular there are differences between the dev environment and production. So when they try to deploy the agents, they get failures because of differences in system configuration. That's one of the beauties of containers: the configuration is isolated and wrapped around the agent. A single Docker command creates the container and starts the agent, which is very quick, and it works equally well in the cloud and in an on-premises data center. Contrast that with the current situation, where you have to install on a host the traditional way, with an installer. And as we've just learned, the GUI installer for the BPA agent has been deprecated, which in most cases shouldn't be a problem because nobody's using X Windows anymore; I almost never run into a customer whose servers are configured for X11. But you still have to install the agent, and whenever you're installing something you can run into all kinds of issues with permissions, ownerships, directories, and so on.
When you're installing on a host, you have to change the environment to some degree: put in scripts, change permissions. And again, if you're doing a large-scale deployment, you can see real failure rates if the customer's server configurations are not absolutely uniform. It's also not very cloud friendly. With the agent available as a container, we solve a lot of those problems. The container brings its own environment with it, isolated from the host and from other containers running on that host. One Docker command creates the container and starts the agent, and the agent collects its data from the host operating system kernel, so it still has visibility into the process space on the container host. It reduces the risk of failed deployments, and it's equally comfortable in the cloud and in the data center. At this point I'm going to go into a demonstration, and I will share my desktop. Before I launch into the demonstration, I just want to quickly run through the steps involved. I'm not going to do all of them, because I've already got my container loaded and running, but basically there are four steps. First, you get the container image. Then you load the zipped image into a local repository using the Docker command. You confirm that the image is there, and then the key command is docker run, which you use to run the container. This is what the container looks like when it's running: it has a container ID, which you'll need if you want to run certain commands against it, for instance to start a bash shell on the container. You can also give the container a name. My container has been running for about nine days, and it has been collecting data that whole time.
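The four steps just described can be sketched roughly as below. The image file name, image tag, and container name are placeholders I've made up for illustration, since the actual names depend on the delivered image; the --pid=host flag is likewise my assumption about how the agent is given visibility into the host's process space.

```shell
# 1. Get the container image (it is delivered as a compressed image
#    archive; the file name here is a placeholder).

# 2. Load the image into the local Docker repository.
docker load -i capacity-agent-image.tar.gz

# 3. Confirm the image is there.
docker images

# 4. Run the container. Naming it makes later commands easier, and
#    --pid=host (an assumption) shares the host's process namespace
#    so the agent can see processes on the container host.
docker run -d --name capacity-agent --pid=host capacity-agent:latest
```

With a named container you don't have to look up the container ID first: `docker exec -it capacity-agent bash`, for example, starts a bash shell inside it.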
Once you've started the container, you've really got just a standard BPA agent, and the BPA agent is available on my desktop. First of all, you've got to add it to your gateway server: you add it to the agent list so it gets added to a manager run, so that data collection is started and the data will start showing up in the workspace. What I did initially, just to get something flowing, was add my Docker agent to the uncataloged systems so that I could do an Investigate study. Once it is added to a manager run, you'll see it in the list of nodes, and this is my Docker container here. So the first thing I did was create an Investigate study, which is what you're looking at here. I'm collecting a number of metrics, CPU utilization and memory, and also process-level data, which I can drill down on. One of the main advantages of the capacity agent is its visibility into process-level data at a very granular level. You can see I've got all of my process data here in my Investigate study, and I can sort it. There's not a lot going on on that host; the main things consuming CPU are the bgscollect process itself, which is our capacity agent data collector, and the Docker daemon. So that's the Investigate study. My Docker agent is now also visible in the workspace. I created a Docker domain to contain my Docker agent; it's got one thing in it, which is my Docker agent, plus one quick graph that I created, a performance analysis. I selected some key processes from the Docker host to graph, so it's basically CPU utilization for the busiest processes on the box, which are our own processes. Anything with bgs in front of it is us.
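Before the agent shows up in the workspace, you can sanity-check it from the Docker host. This is a sketch, assuming the container was named capacity-agent at run time and that ps is available in the image:

```shell
# Check that the container is up and how long it has been running
# (the STATUS column shows uptime, e.g. "Up 9 days").
docker ps --filter name=capacity-agent

# Start an interactive bash shell inside the running container.
docker exec -it capacity-agent bash

# Or, without an interactive shell, confirm the data collector is
# running; bgscollect is the capacity agent's collector process.
docker exec capacity-agent ps -ef | grep bgscollect
```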
And then there's the Docker daemon and the container daemons that belong to Docker itself.