Training a machine learning model can be time-consuming. To speed up the process, we can leverage GPUs. However, configuring GPU hardware can be time-consuming as well. So, how can we speed up model training and the overall process? By using GitLab SaaS GPU Runners in our continuous integration and delivery pipelines. Let me show you how this capability can significantly accelerate machine learning training and enable faster iterations in the experimentation process.

Let's take this notebook as an example and execute each one of the cells. My task is to predict yearly income using the census dataset. Here I will start training a model based on XGBoost. Let's run this cell. We see the optimization process has started, and I can see the kernel is busy running this task. This can take a long time, and I don't want to wait that long and have my computer busy with the fans going full blast while this model trains. So, what can I do? I will leverage GitLab and its machine-learning-friendly capabilities.

Here, in the issue that was created to track the progress of the model training task, I will create a merge request to add the respective changes. I will use the Web IDE to make those changes to the model. I previously pushed a refactored version of the Jupyter notebook we saw before, and I will use it to configure XGBoost to use the GitLab SaaS GPU Runners. All I have to do is pass the gpu_hist value to this parameter, and that's all. Good. I commit these changes and let the training begin.

Since I'm using GitLab CI, it has been configured to automatically start a pipeline in the merge request. This pipeline trains the model and will report back different model metrics. Let's open the training job while it executes. We see the hyperparameter search and training are running, and all of this is happening on a GPU. These are the specs of the graphics card the GitLab Runner is using.
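For reference, the one-line change mentioned above looks roughly like this. This is a hedged sketch, not the demo's actual notebook code: every parameter other than `tree_method` (the objective, `max_depth`, `eta`) is an illustrative assumption.

```python
# Sketch of the change described above: switching XGBoost's tree
# construction from the CPU to the GPU histogram algorithm.
# All parameters except tree_method are illustrative placeholders.
params = {
    "objective": "binary:logistic",  # e.g., income above/below a threshold
    "max_depth": 6,
    "eta": 0.1,
    "tree_method": "gpu_hist",  # the single change: train on the GPU
}
```

Note that in XGBoost 2.0 and later, `gpu_hist` is deprecated; the equivalent setting is `tree_method="hist"` combined with `device="cuda"`. `gpu_hist` is the value shown in the demo.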
And here, this tag indicates to me the family of runners chosen for this training job. All right, the job has finished. Let's scroll down: it took almost 37 seconds to run this script. This is a significant improvement with respect to my previous local approach. Let's go back to the merge request page, and we see that the pipeline has finished. On top of the faster training, it is also notifying me about different training metrics, and it displays the best hyperparameters found, right here in the merge request view. This makes collaboration easier, allowing us to bring other folks in to review this merge request and the model. Okay, let's say I am happy with these results. I'm going to merge this code to the main branch.

If you made it this far in this demo, you might be wondering: how was all of this done? Good question. Let me show you. In the GitLab project, I define a CI YAML file where, for this demo, I have specified three jobs: one in charge of building the container image used to run the XGBoost code, the training job, and the notification one. The build job takes the Dockerfile definition in the root of this project and uses it to build an image that is automatically pushed to the GitLab Container Registry. The Dockerfile takes as a baseline an NVIDIA image with CUDA drivers installed and adds the dependencies required to run the model. As for the training part, this job pulls the image built in the previous job, and all you have to do to get a runner with GPU hardware is to add this tag. With this, the drivers and the GPU-enabled hardware meet and give you the compute power needed to speed up model training.

All right, let's look at this in a more visual way. Let's visualize the pipeline and recap the steps. First, GitLab builds the Docker image with GPU drivers; upon completion, that image is picked up by GPU-enabled hardware to run the training job.
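The three-job pipeline described above can be sketched roughly as the following `.gitlab-ci.yml`. This is a hypothetical reconstruction, not the demo's actual file: the job names, image references, script commands, and the exact runner tag are assumptions (check GitLab's documentation for the GPU runner tags currently available on SaaS).

```yaml
# Hypothetical sketch of the three-job pipeline described in the demo.
# Job names, script paths, and the runner tag are illustrative assumptions.
stages: [build, train, notify]

build-image:
  stage: build
  image: docker:latest
  services: [docker:dind]
  script:
    # Build from the Dockerfile at the project root (based on an NVIDIA
    # CUDA image) and push it to the GitLab Container Registry.
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:latest

train:
  stage: train
  image: $CI_REGISTRY_IMAGE:latest            # the image built above
  tags:
    - saas-linux-medium-amd64-gpu-standard    # assumed GPU runner tag
  script:
    - python train.py                         # hypothetical training script

notify:
  stage: notify
  image: $CI_REGISTRY_IMAGE:latest
  script:
    - python report_metrics.py                # hypothetical: post metrics to the MR
```

The key line is the entry under `tags:`; that single tag is what routes the training job to a GPU-enabled runner, so the CUDA drivers baked into the image and the GPU hardware meet at job time.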
And then, when the training job finishes, different reports and metrics are automatically published to the merge request view, enabling a collaborative development workflow in machine learning projects. Easy. You can start using GPU runners today. Click on the video description below for more information and access to this project template.