So, let's find out how to record job metrics. Galaxy has the ability to record metrics about the jobs it executes: things like how much memory a job used, how long it ran, what type of CPU it ran on, and what kernel it ran under. All of these can be useful for optimizing your workloads. There is some existing work on optimizing job submissions, tuning the amount of memory and the number of CPU cores that get allocated so that the allocation is as efficient as possible. For some tools, for example, you can estimate how long a job will take based on the input size, because the runtime scales roughly linearly with it. A lot of work is being done in this area, so if you're interested in helping out, let us know.

Note that by default, job metrics are only visible to administrative users unless you set expose_potentially_sensitive_job_metrics to true, like usegalaxy.eu does. We're a big proponent of transparency, so we share with our users all of the job metrics that are safe to expose.

So, we'll set up a new template file with some job metrics; just copy that over. Here we'll collect core: these are the basic metrics that Galaxy always collects. We'll also collect the CPU information and the memory information. We'll collect the uname, which is the output of uname -a. We'll collect the environment variables; this is the plugin most likely to leak secrets, so if you're exposing potentially sensitive job metrics, please don't collect the environment unless you have specific reasons and specific variables you want to collect. You can collect cgroup metrics: certain schedulers run jobs inside cgroups, just like systemd runs Galaxy in a cgroup. This enables very detailed accounting of CPU and memory usage for the job, and by using the cgroup plugin you can collect a lot of that detailed memory and CPU information, which is very useful for predicting job runtimes. And lastly, we'll collect the hostname metric, just like in every other tutorial.

We will then edit our group variables (group_vars/galaxyservers.yml) and register this new configuration file with the Galaxy configuration. All of these should line up, of course. We will add it to the templates section as well: down in galaxy_config_templates, we copy that over. Again, make sure everything lines up within the same variable. And when that's done, we'll run the playbook.
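For reference, here is a minimal sketch of what those two pieces might look like. The template path, the galaxy_config_dir variable, and the exact plugin syntax are assumptions based on a typical galaxyproject.galaxy Ansible setup, so check the sample job metrics configuration shipped with your Galaxy release rather than copying this verbatim.

```yaml
# Assumed template: templates/galaxy/config/job_metrics_conf.yml.j2
# One entry per metrics plugin to enable.
- type: core      # basic metrics Galaxy always collects (runtime, allocated cores/memory)
- type: cpuinfo
- type: meminfo
- type: uname     # output of `uname -a`
- type: env       # full job environment; the plugin most likely to leak secrets
- type: cgroup    # detailed CPU and memory accounting from the job's cgroup
- type: hostname
```

And the matching entries in group_vars/galaxyservers.yml, so the playbook deploys the template and points Galaxy at it:

```yaml
galaxy_config:
  galaxy:
    # Only enable this if you are comfortable showing job metrics to non-admin users.
    expose_potentially_sensitive_job_metrics: true
    job_metrics_config_file: "{{ galaxy_config_dir }}/job_metrics_conf.yml"

galaxy_config_templates:
  - src: templates/galaxy/config/job_metrics_conf.yml.j2
    dest: "{{ galaxy_config.galaxy.job_metrics_config_file }}"
```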
So now when we run tools, any tools, even the upload tool, job metrics will be calculated for all of them. You'll be able to see things like the entire environment: what shell it used, what virtual environment and language it was run under, the hostname of the server, and the memory information. Again, the cgroup metrics are the most accurate and the most useful. You'll see here that the memory soft limit is eight exabytes; that is just the maximum possible value, because no soft limit is set for that cgroup. The maximum memory usage during the execution of the tool was 659 megabytes. The out-of-memory control was enabled, but it didn't activate. Additionally, look at the memory limit on the cgroup: this job clearly ran within the Galaxy cgroup, because it has the same 32 gigabyte limit as the Galaxy server itself, so I'm guessing what ran there was an upload job. We'll do that now and compare: I'm going to paste in some data again, and I'll rerun one of our old tools as well while we're at it.

The upload jobs we haven't sent anywhere special, so they should be running underneath the Galaxy environment. And you'll see... oh, no, they run under Slurm. OK, so you'll see a lot of information here; let's go through it a little bit. The core metrics collected are things like how many cores were allocated: you'll see in testing tool two that when we ran the testing tool, it got allocated two cores and one megabyte of memory. You'll also see a lot of environment information: Slurm sets a large number of environment variables, and those are all collected. Again, some of these are not useful to expose to users, or it's simply a lot of information, and it also fills up your database very quickly. It collects the hostname as well as the memory information, the total system memory, which we can see is about 16 gigabytes.

OK, that's how job metrics work. If you want to collect them, great, go for it. They can give you some more information about how your jobs are running, and you can make decisions based on that. As always, be careful about what you collect. You can access the metrics for a specific job through BioBlend, or via SQL with gxadmin, which we'll cover a little bit later in the week. As always, if you have any questions, comments, or concerns, please submit the feedback form. Just let us know what you thought of the tutorial, even if you don't have questions or comments; let us know whether it was useful for you. Thank you so much.