Hi, I'm Sebastian Gruber, and I'm presenting the mlr3hyperband package for multi-fidelity hyperparameter optimization.

First of all, what is hyperparameter optimization, and why is it important? The performance of a machine learning algorithm often critically depends on suitable hyperparameters. However, it is often not clear from the beginning which hyperparameters lead to the best performance. Furthermore, many models have a hyperparameter that hugely influences the runtime cost; we call it the budget parameter. In this example figure, we show how we evaluate different configurations with increasing computational budget. We see that performance with little budget is indicative of performance at high budget. That is why it is often possible to estimate a model's performance when a cheaper version of the model is available. In this specific figure, we can save a lot of resources by not evaluating the faded-out settings.

This is where hyperband steps in as an efficient tuning algorithm: it makes use of the information from low-budget evaluations and stops underperforming configurations early. Only two user-defined parameters are required for hyperband: the budget increment factor eta and the maximum budget per configuration R. Eta also acts as the divisor determining the fraction of configurations that remain after each step (1/eta survive). Because bad performance early on does not always mean bad performance later on, hyperband creates several brackets, ranging from an aggressive selection policy to no selection at all. Different numbers of configurations are automatically explored in the different brackets. Here is an example with eta = 2 and R = 6. As we can see, the first bracket starts with 9 configurations, while the last one is only allowed to have 3.

Let's head over to a tuning problem we want to solve with our package. We want to tune the parameter alpha, responsible for the L1 regularization, of the XGBoost algorithm.
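Before diving in, the bracket layout described above can be sketched in a few lines of base R. This is a simplified illustration of the standard Hyperband schedule formulas (as in Li et al.), not the package's internal code; the example values eta = 3 and R = 81 are the classic ones from the literature, not the numbers on the slide.

```r
# Sketch of the Hyperband bracket schedule: for each bracket s we start
# with n configurations at a small budget r and repeatedly keep the best
# 1/eta of them while multiplying the budget by eta.
hyperband_schedule <- function(eta, R) {
  s_max <- floor(log(R, base = eta) + 1e-9)  # most aggressive bracket index
  B <- (s_max + 1) * R                       # approximate budget per bracket
  do.call(rbind, lapply(s_max:0, function(s) {
    n <- ceiling((B / R) * eta^s / (s + 1))  # initial number of configurations
    r <- R / eta^s                           # initial budget per configuration
    i <- 0:s                                 # successive-halving stages
    data.frame(
      bracket = s,
      stage   = i,
      n_i     = floor(n / eta^i),            # configurations kept at stage i
      r_i     = r * eta^i                    # budget per configuration at stage i
    )
  }))
}

sched <- hyperband_schedule(eta = 3, R = 81)
print(sched)
```

The first stage of each bracket shows the pattern from the talk: the bracket with the most aggressive selection starts with the most configurations (here 81), while the bracket with no selection at all runs only a handful (here 5), each at full budget.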
For this, we specify the number of boosting iterations, nrounds, as the budget parameter by adding the tag "budget" in the paradox parameter set definition. Next, we define the tuning instance. Here we want to fit our task by setting XGBoost as the classifier and using holdout as the resampling method. The tuning measure is the classification error, and we optimize the hyperparameters of the just-defined set. Hyperband terminates on its own, so no terminator is required, even though one may act as an upper bound.

Next, we initialize a new tuner with eta = 4 and then optimize the previously defined tuning instance. During the run, information about the bracket layout is printed to the console, as can be seen here. Two brackets are evaluated, the first with 4 configurations, the second with only 2.

Let's inspect the results. As we can see, the first configuration is the best of the low-budget ones and is therefore re-evaluated with a higher nrounds in run 5, even though run 7, belonging to the second bracket, turns out to be the final best one.

mlr3hyperband lives in the mlr3 universe: it can be used with any learner available within mlr3learners, and it can also tune pipelines from mlr3pipelines. For more information, check out our poster and our package on GitHub. Thanks for your attention.
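The workflow just described can be sketched as follows. This is a hedged sketch based on the public mlr3, mlr3tuning, paradox, and mlr3hyperband APIs of that era; the task id, the parameter ranges, and the bounds are illustrative assumptions, not the exact values shown in the talk.

```r
# Sketch only: assumes mlr3, mlr3learners, mlr3tuning, mlr3hyperband,
# paradox, and the xgboost backend are installed.
library(mlr3)
library(mlr3learners)
library(mlr3tuning)
library(mlr3hyperband)
library(paradox)

# Search space: alpha controls L1 regularization; nrounds is tagged as
# the budget parameter so hyperband knows which knob to scale.
search_space <- ps(
  alpha   = p_dbl(lower = 0, upper = 1),
  nrounds = p_int(lower = 1, upper = 16, tags = "budget")  # bounds assumed
)

instance <- TuningInstanceSingleCrit$new(
  task         = tsk("sonar"),            # placeholder task, not from the talk
  learner      = lrn("classif.xgboost"),
  resampling   = rsmp("holdout"),
  measure      = msr("classif.ce"),
  search_space = search_space,
  terminator   = trm("none")              # hyperband terminates on its own
)

tuner <- tnr("hyperband", eta = 4)
tuner$optimize(instance)                  # prints the bracket layout
instance$result                           # best configuration found
```

Setting `terminator = trm("none")` reflects the point made above: hyperband runs a fixed schedule of brackets and stops by itself, so a terminator only serves as an optional upper bound.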