Okay, yeah. Hi everyone again. So I'm Elias Mima from the Technical University of Vienna, where we have roughly 27,000 students and 5,000 employees; we are a science and engineering university. Today I want to present some lessons learned from deploying JupyterHub for lectures and teaching.

Maybe a bit about the project first. As I said in the introduction, I work in a group called the Data Lab, where we support our researchers and lecturers with data science and machine learning workflows. Quite early, the need came up for Jupyter notebooks, and also for accelerators, GPUs, in those notebooks. But once I had built the first system, I was soon asked whether it could also be used in lectures and teaching. So my focus over the last year has been supporting several lectures with Jupyter notebooks and JupyterHubs. Last semester we had around 850 students; this semester we have a few more lectures but fewer students, around 650, although the requirements are a bit harder this semester. And the requirements are really diverse: we have small lectures with 20 students doing Earth observation on large data sets, needing notebooks with a lot of memory, and on the other hand we have these really large beginners' programming courses with 200 students doing simple Python programming, where one gigabyte of memory per notebook is enough, as an example.

One thing to solve first was authentication. In Austria we have lots of SSO solutions, but no really good, uniform way of providing authentication for new services. So I came up with the solution of using the learning platform, the learning management system, Moodle in our case, as the authentication provider, adapting an existing JupyterHub authenticator to use the LTI 1.3 interface that Moodle and Canvas support.

Another point, as I already mentioned, is that the lectures are quite diverse.
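The Moodle-based login described above could look roughly like the following JupyterHub configuration. This is a minimal sketch assuming the community `jupyterhub-ltiauthenticator` package; the URLs are placeholders, and the exact trait names should be checked against the version you deploy.

```python
# jupyterhub_config.py -- sketch of LTI 1.3 login via an LMS such as Moodle.
# Assumes the jupyterhub-ltiauthenticator package; all URLs and the client
# ID below are placeholders, not a real deployment.
c.JupyterHub.authenticator_class = "ltiauthenticator.lti13.auth.LTI13Authenticator"

# Issuer and endpoints as registered in the LMS when creating the LTI tool
# (Moodle exposes its auth and JWKS endpoints under /mod/lti/).
c.LTI13Authenticator.issuer = "https://moodle.example.org"
c.LTI13Authenticator.authorize_url = "https://moodle.example.org/mod/lti/auth.php"
c.LTI13Authenticator.jwks_endpoint = "https://moodle.example.org/mod/lti/certs.php"
c.LTI13Authenticator.client_id = ["placeholder-client-id"]

# Which claim from the LTI launch token becomes the JupyterHub username.
c.LTI13Authenticator.username_key = "email"
```

With this, students never see a separate login page: following the LTI link in their Moodle course launches them straight into their notebook server.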
The software stacks they use vary: some come from technical chemistry and need special packages, others do machine learning, deep learning, whatever. So I decided, in the current phase, to provide individual software stacks for each lecture. And since this is for teaching and exercises, we needed some method for grading and for exchanging the exercises; we are going with nbgrader right now, but I will tell you more about that later. And of course, persistence of the data was important: we persist shared folders, home folders, and optionally group folders.

Some optional features we use: a virtual desktop, a browser-based VNC solution, for lectures that need GUI tools. And right now we have a lecture doing some C++ programming, so we also gave them a VS Code-like IDE, Theia, in our setup.

We run all of this on Kubernetes, deployed with Magnum on our OpenStack cluster, which StackHPC helped us build. I will come to the details later, but we were using Cinder as the backend for the Kubernetes PVs, backed by Ceph. We were also using Manila, and now we are using CephFS directly; for load balancing we are using Octavia. We are running everything on virtual machines right now, no bare-metal machines, and I'm using Zero to JupyterHub, the Helm chart for JupyterHub on Kubernetes, where we additionally create resources in the cluster with some Ansible scripts.

The main effort really went into building the individual images with the individual software stack for each lecture, so we already have a quite elaborate CI/CD setup for this. The images are based on the Jupyter Docker Stacks, we do image scanning since this semester, and we run everything from our own registry to speed things up. So yeah, as I said, we are using the Jupyter Docker Stacks.
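The per-lecture stacks and resource limits described above can be wired up roughly like this. It's a sketch with made-up image names and limits (not our real courses), assuming KubeSpawner's `profile_list` mechanism, where each profile overrides the image and memory for one lecture.

```python
# Sketch: turn a table of lectures into KubeSpawner profiles, so each
# course gets its own image and memory limit. Image names and sizes
# below are illustrative placeholders.
LECTURES = {
    "earth-observation": {"image": "registry.example.org/eo-notebook:2024", "mem": "16G"},
    "python-basics":     {"image": "registry.example.org/py-notebook:2024", "mem": "1G"},
}

def build_profile_list(lectures):
    """Build KubeSpawner profile entries from the lecture table."""
    return [
        {
            "display_name": name,
            "kubespawner_override": {
                "image": spec["image"],
                "mem_limit": spec["mem"],
                "mem_guarantee": spec["mem"],
            },
        }
        for name, spec in lectures.items()
    ]

profiles = build_profile_list(LECTURES)
# In jupyterhub_config.py one would then set:
# c.KubeSpawner.profile_list = profiles
```

Keeping the lecture table in one place like this makes the CI/CD side easier too: the same table can drive which images get built and scanned each semester.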
And if you have a collaborative Jupyter environment with shared folders, you have to have real users inside the containers. That is not really a problem if you run the containers as root, because then you can inject new users at container startup, but running the containers as root doesn't really feel safe. So one thing I want to try next is to somehow bring LDAP into the containers, so that I can run them without root access. Another problem right now with collaborative workflows in JupyterHub is that there is no real way to get groups into the JupyterHub environment. I'm working on enabling it, but it's still not ready; you have the pull request here for this one.

But now back to the OpenStack problems I had, which were mainly storage issues. First, I was naive and used one Cinder-backed volume per home directory. This leads to quite a lot of volumes, and it was really hard to back up, because I didn't find a good solution for backing up block-storage PVCs in Kubernetes in a way that lets you track which user each PVC belongs to. This also led to some problems in Cinder: startup times increased, and we had a problem in the Kolla setup where the Apache used for the Cinder container ran into timeouts.

So I decided to switch over to a shared file system. First I used Manila generic share servers with NFS, because that was quite easy to set up. But I soon ran into another problem: the share servers used the VirtIO driver for storage, so each Manila share server was only capable of attaching about 30 shares, and after that, creating new PVCs failed. That was not a good thing, and resizing didn't really work for me either. We're running out of time, so I'll speed up: I decided to use CephFS instead, because it is quite easy to integrate into Kubernetes and easy to back up as well.
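The switch from one volume per home to a single CephFS-backed PVC can be sketched like this: every notebook pod mounts the same claim, and per-user homes, shared folders, and group folders are just subPaths into it. The PVC name, mount paths, and layout below are illustrative assumptions, not our exact setup.

```python
# Sketch: one shared CephFS-backed PVC instead of a Cinder volume per user.
# Homes, a read-only shared folder, and group folders are subPaths of the
# same claim. Names below are illustrative placeholders.
SHARED_PVC = "jupyterhub-cephfs"  # single PVC mounted by every notebook pod

def user_mounts(username, groups=()):
    """Build the Kubernetes volumeMounts for one user's notebook pod."""
    mounts = [
        # The user's own home directory.
        {"name": "data", "mountPath": "/home/jovyan",
         "subPath": f"homes/{username}"},
        # Course material distributed read-only to everyone.
        {"name": "data", "mountPath": "/srv/shared",
         "subPath": "shared", "readOnly": True},
    ]
    # Optional writable group folders for collaborative exercises.
    for g in groups:
        mounts.append({"name": "data", "mountPath": f"/srv/groups/{g}",
                       "subPath": f"groups/{g}"})
    return mounts
```

Because all user data lives in one file system tree, backing it up is a single job, and it's immediately clear which directory belongs to which user.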
And it also means I could design the structure of the homes and shares, the folder layout, myself, so it's easy to reuse, for example by giving users access to their files via ownCloud or some other way.

A few things we are currently developing: we want more fine-grained monitoring; GPU support is still an open question, asked about a lot by our researchers and lecturers; and we had quite a few problems with the grading service, nbgrader, so right now we are developing a new grading service based on nbgrader that should fix a lot of these issues. When it's ready, we will release it as open source. And of course, there is still the plan to have JupyterHubs for research.

My last point: we are, as always, hiring. So if someone is interested, or can recommend some capable OpenStack engineers, please, you're welcome to do so. I am finished. Thank you.