So, let's give a warm welcome to our next speaker, Xavier, who will be talking about environment modules.

Thank you. So, good morning, everyone. It's a real pleasure for me to be here at FOSDEM this year to talk to you about Modules. Modules is an open-source project that can ease your day-to-day work when you have to deal with a terminal, a shell console. This project is also referred to as Environment Modules, for disambiguation.

Let me introduce myself. My name is Xavier Delaruelle. I have been the Modules project leader for almost two years. I work at CEA, which is a large French research institute, and more specifically in its high-performance computing division.

Typically, when you work on a console with a shell terminal, you have an initialization file you can use to set up your environment properties. But with the regular facilities provided by shells like bash, ksh and so on, it is quite complicated to know what has actually been configured. It is also quite hard if, with the same user account, you need to work on various aspects and various projects. Here is a simple example where you have a production setup that requires you to update some of the default environment variables. But if you want to work on a test setup, to evaluate a new version of a software for instance, you need to edit your configuration file every time you switch from one to the other. That is not very convenient.

Modules can help with this use case. It provides a module shell function that updates the current state of your shell: environment variables, aliases and so on. It does that by evaluating modulefiles, which each represent an environment change set. Once a modulefile is loaded, it is tracked, so you know what is loaded, and you can unload it to remove the changes it made to your shell. A small example here: we want to use the appW tool. It is not there by default; bash cannot find it.
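That failed lookup can be reproduced in any shell; a minimal simulation, reusing the appW name from the slide (a tool that does not exist on an ordinary machine either):

```shell
# Before any module is loaded, the tool is simply not on PATH.
if ! command -v appW >/dev/null 2>&1; then
  echo "appW: command not found"   # prints: appW: command not found
fi
```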
But by loading the modulefile for this software, we afterward get the tool in our PATH.

So, how does it work? First, the modulefiles. Modulefiles are scripts written in Tcl that express a change to a user environment. You can use the full Tcl scripting language, and on top of that we add some specific commands to handle environment matters. For instance, the appW modulefile adds the specific binary directory of this software to the PATH variable with a dedicated modulefile command.

Then, on top of the modulefiles, we have the modulecmd.tcl script. This script takes a command and a shell name, and for this shell it produces code that represents the change to apply to your current shell session. In this example, we call the modulecmd.tcl script to get bash output that represents the load of the appW modulefile. The important part, among others, is that we see the PATH variable being updated with the binary directory of this application.

The last part is the module shell function. This shell function just calls the modulecmd.tcl script, gets the shell output, and evaluates it. By doing so, it updates your current shell session. Those are the basic principles behind all of this.

When you start to work with modulefiles to update your environment, you can use catalogs to organize them, and then enable these catalogs with the module command. Afterwards, you are able to search through these catalogs for a given modulefile, and you can load or unload modulefiles by just referring to their short relative name. You can also show what a modulefile does, so it tells you what environment changes will happen if you try to load it. Here we want to see what appX is, so we display the whole definition of the appX modulefile.
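The evaluate-the-generated-code mechanism described above can be sketched in plain shell. This is a toy stand-in for modulecmd.tcl with an invented path, just to show the principle, not the real tool:

```shell
# Toy stand-in for "modulecmd.tcl bash load appW": it only PRINTS the
# shell code describing the environment change; it changes nothing itself.
fake_modulecmd() {
  echo 'export PATH=/opt/appW/1.0/bin:$PATH'
}

# The module shell function then evaluates that output in the current
# shell, and that evaluation is what actually updates the session.
eval "$(fake_modulecmd)"

echo "$PATH" | cut -d: -f1   # prints: /opt/appW/1.0/bin
```

Running the command through eval, instead of as a child process, is the whole trick: a child process could never modify its parent shell's environment.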
Modulefiles can also express dependencies between themselves because, for instance, to use an application you may first need to load a library. That is why appX defines libA as a prerequisite. When you try to load appX, the module command automatically handles the prerequisites: it first loads libA, then loads appX. In the resulting environment, both modulefiles are loaded automatically for you; you do not need to worry about it.

With module, you can also dump your current environment state. This is called a collection: you just run module save with a name, and a collection of that name is registered in your user environment. You can show what a collection does: it is a set of module commands that, when applied, restore the environment you had. So you can switch from one environment to another. For instance, say you have one production collection and one test collection. If you are in the test environment and you restore the prod collection, it will unload all the test modulefiles and then load the production modulefiles and their related setup.

If you are a system administrator, Modules can help you satisfy your users when you have to deal with a shared system that you provide access to for many users. In this situation, you always have one group of users who want software A in one version, and a second group of users who want the same software in another version. These are conflicting needs on the same resource, so you cannot use the standard installation paths for them. Modules can help you give access to a very large catalog of products.
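A modulefile carrying the kind of prerequisite just described could look like this in Tcl; the appX and libA names come from the talk's example, while the installation path is invented for illustration:

```tcl
#%Module
# Hypothetical modulefile for appX: declare that libA must be loaded
# first, then prepend appX's binary directory to PATH.
prereq       libA
prepend-path PATH /opt/appX/1.0/bin
```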
So, if you have a very large number of users, you end up with a very large number of installations of different versions of the same software, and this applies to hundreds or thousands of software packages. Modules can help you provide access to these software catalogs for all of your users.

In this situation, you will also end up with users using different shells: some will use bash, others fish or tcsh. It can be very hard for you, as a system administrator, to provide guidelines to your users because, for instance, updating an environment variable is not done the same way in the Bourne shell family as in the C shell family or in fish. But with the module command, it is always a module load to enable something, so you can write shell-agnostic guidelines for your users. Currently, Modules supports most of the common shells out there: sh, bash, ksh, zsh, the whole C shell family, fish, and cmd on Windows. You can also call Modules from scripting languages like Python, Ruby, Tcl or Perl, to get access to your software catalog from your scripts.

You cannot see it very well, but this map represents the readers of the Modules documentation all over the world. Modules is quite widely used out there. In fact, this project is nearly 30 years old, and it is mainly used in the scientific computing world, where scientists all access the same supercomputers. In this field, module is the standard way for scientists to access large software catalogs.

Currently, the project has a nice development pace, with multiple releases each year. The software is well integrated in all the Linux distributions; you can also get it from Homebrew on macOS, and it is available on FreeBSD as well. If you follow this link to the repology.org site, you can quickly see all the versions available on all OSes.

So, current development trends.
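To make the shell-agnostic point from a moment ago concrete, here is the same PATH update in each shell family, with a made-up directory; only the Bourne-family line actually executes here, the others are shown as comments:

```shell
# Bourne family (sh/bash/ksh/zsh), runnable in this script:
export PATH=/opt/appX/2.0/bin:$PATH
# The equivalent change elsewhere would read:
#   csh/tcsh: setenv PATH /opt/appX/2.0/bin:$PATH
#   fish:     set -gx PATH /opt/appX/2.0/bin $PATH
# whereas with Modules the user-facing instruction is identical everywhere:
#   module load appX
echo "$PATH" | cut -d: -f1   # prints: /opt/appX/2.0/bin
```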
The work is currently focused on ways to automatically solve and apply the dependencies between modulefiles. The idea is to follow the same principles we find today in software packaging tools like DNF or APT. There is other very cool stuff to work on, like providing a cache for modulefiles to speed up searching through thousands of them. It could also be very interesting to expire modulefiles after a given date. Modulefiles today are written in Tcl; it could be quite convenient for newcomers to be able to write them in Python, because people know Python better than Tcl these days. And we can think of a lot of new commands, like stashing and popping environments through collections, like you can do when working with something like Git.

So, contributions are really welcome. There is very nice stuff to work on, and you can also come to the mailing list to share your own ideas. A good thing to know is that there is a very large non-regression test suite for this software: currently, approximately 8,000 non-regression tests, which provides a very good coverage level. We are currently at 99% coverage. And all of that runs in continuous integration against Linux distributions, macOS, FreeBSD and even Windows.

So that's it. Thanks for your attention. And if you have any questions, feel free to shout.

So we do have time for one or two questions if you want to ask Xavier. Otherwise, he will be available outside of the room as well. Oh, we have one question. Go for it.

The question is: as of today, there is this container space, so what can modules still be used for? In fact, containers shift things a bit, and they are very nice if you want to provide a specific setup with one application in one version. But as soon as you need to provide multiple versions of the same software, you can, of course, handle multiple containers.
But if you just want to maintain one container, you can have all your applications installed in this container and use module to switch from one version to another. So containers can be a way to simplify your work, but as soon as you need to provide a complex matrix of versions and dependencies, modules can still be there to help, even in a container world. Of course. Even in this HPC world, there is also a good trend toward supporting containers.

Any other question? Great. Thank you very much. Thank you. Oh, perfect. Thank you.