The sound should also be there on the live stream. So we have to pass some parameters and options to the plugin code. This is done via the Bareos configuration. The Bareos core code then calls functions inside the Python code on defined events, and the plugin code can modify Bareos variables. In the corresponding component, for example the File Daemon configuration, the plugin usage must be explicitly enabled, as shown here.

This is an example of what can be done with a Director plugin. It's useful for monitoring purposes, as there is an existing plugin, also on GitHub, which can be used to send performance data to Nagios or Icinga. This is an example of how it's configured, and these are the variables that can be read from the Director.

Now for the FD plugins. The package to be installed is called bareos-filedaemon-python-plugin, and as shown here, it must be enabled. These lines are in the default configuration that comes with the package, but commented out. Then we have to restart the Bareos File Daemon. The Plugin Names statement can be omitted, but as there can also be other plugins, for example the bpipe plugin that was mentioned before, it would then load all plugins found in the directory /usr/lib64/bareos/plugins. With Plugin Names = python, it will only load the Python FD plugin as a shared library. In the configuration it is then determined which Python plugin gets used in the end, so several Python plugins are possible. This is defined in the FileSet resource: there we define which Python code gets used, and we can add custom parameters that are passed to the Python code. The two types of File Daemon plugins, command plugins and option plugins, will be explained a bit later.

So this is a complete example of how to configure the use of a Python plugin. As I said, it's defined in the FileSet, where you would normally say File = / or something.
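A minimal sketch of the configuration just described, assuming the standard resource names from the Bareos documentation; the client name, FileSet name, and the filename parameter here are made up for illustration:

```
# bareos-fd.conf -- enable the Python plugin in the File Daemon
FileDaemon {
  Name = client1-fd
  Plugin Directory = /usr/lib64/bareos/plugins
  Plugin Names = python
}

# FileSet in the Director configuration -- the Plugin line replaces
# the usual File = entry and selects the Python module to run
FileSet {
  Name = "PluginTest"
  Include {
    Options {
      signature = MD5
    }
    Plugin = "python:module_path=/usr/lib64/bareos/plugins:module_name=bareos-fd-local-fileset:filename=/etc/hostname"
  }
}
```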
There, instead, we define this Plugin line, which then determines which plugin will be run. And this is a job example which will use this FileSet. In this example it's a simple demo plugin called bareos-fd-local-fileset: it just processes a list of files and backs up those files. It's a good example for learning how the Python plugin in Bareos works. This would be an example of the result of such a job, and this is an example of how to run a restore: the FileSet must be selected, and then we also get a list of files that we can mark for restore.

Now I want to explain a bit more how the plugins work. As we've seen, there is a library written in C, python-fd.so. When we use a Python plugin, it instantiates a new Python interpreter, extends the search path for Python code with the path that we give with module_path, and imports the module that we've given with module_name. That determines which plugin is run in the end. This is the entry point which must be available, with the callback functions. It works such that the C part calls Python functions; we also call them hook functions. There are also a lot of constants defined which must be used within the Python code; we'll see an example a bit later.

The next step is that it calls load_bareos_plugin in the Python plugin code, and then it calls parse_plugin_definition, which gets the plugin string we pass in with our custom parameters. Then the backup starts. The difference between command and option plugins is that they go through different processing loops, which we come to in the next slides. This is an example of a command plugin configuration: the plugin string is defined in the Include section of the FileSet. Compared to that, in an option plugin definition the plugin string is defined in the Options section of the FileSet. We come to the difference a bit later.
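The plugin string handed to parse_plugin_definition is a colon-separated list of key=value options. This is a self-contained sketch of how such a string can be parsed, for illustration only; the real parse_plugin_definition in the Bareos wrapper code does more than this:

```python
def parse_plugin_definition(plugindef):
    """Split a Bareos plugin string such as
    'python:module_path=/usr/lib64/bareos/plugins:module_name=myplugin'
    into an options dict. Illustrative sketch, not the Bareos code."""
    options = {}
    for part in plugindef.split(":"):
        if "=" in part:
            key, value = part.split("=", 1)
            options[key] = value
        else:
            # the leading 'python' token selects the plugin itself
            options["plugin"] = part
    return options
```

Any extra key=value pairs in the FileSet's Plugin line, for example a database name, simply end up as additional entries in this dict and are available to the plugin code.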
That makes not much sense here. The major difference between command and option plugins is that a command plugin determines itself what is being backed up, so it also must handle differential and incremental backups, while an option plugin gets the list of files to be backed up from the Director. In that case you must define the File = entries in the FileSet, which are then sent to the plugin.

Now to some more callback hook functions that are called within the Python code. When a backup runs, for each file it invokes the start_backup_file function in Python. The context parameter is always passed, and this context must also be passed back when we call functions in the Bareos core. The save packet is a data structure which must be filled with some data for each file that's being backed up. One of the most important functions is the plugin_io function, which handles all the I/O operations. For a simple file backup, in the backup case it must open the file that's being backed up; then plugin_io gets called repeatedly until the file is backed up, and in the end it must close the file. When the file backup is done, end_backup_file is called, and its return code determines whether more files are going to be backed up or not. With command plugins, the handle_backup_file function is not called; that's only called for option plugins. I think I will skip over this, it's mostly the same.

These are functions that call code in the Bareos core, which is written in C. The most important ones are the JobMessage function, which can generate information, error, or warning messages that are passed back to the Director, so that those then also appear in the job log, and the DebugMessage function, which is important when developing a Python plugin because that's the only way to show debug messages.
But those do not appear in the job log; they only appear when we run the File Daemon in debug mode in the foreground. And we have getValue, which is used to read values that are set by the Director into the Python code.

While the code can be monolithic, it makes more sense to use inheritance to reuse existing code. The bareos-filedaemon-python-plugin package provides a base class that defines all the functions that are possible, which is useful to inherit from. And we have BareosFdWrapper.py, which is wrapper code defining all the functions that are called in the plugin. Here we have an example of how to run the bareos-fd in the foreground, so that we see the debug messages at level 100 or lower. How much time do I have? OK. It's a lot of stuff, so sorry. Return codes come from constants.

So to get started with it, it's a good idea to look at one of the existing plugins, read the code, and understand how it works. The base class defines, as I already said, the most important stuff. For the restore case, for example, if you want to restore files, you must also take care to create the intermediate directories, like this example shows.

One of the most important functions is the plugin_io function, which is called with different I/O operation types. At the beginning it's called with IO_OPEN, and there are also I/O flags which can be used to determine whether it's a backup or a restore that we are currently running. That point can also be used differently, because we not only can back up files with the plugin: we can call other functions or other tools to, for example, dump a database and get a data stream back. It does not necessarily have to be a file. Then it's called repeatedly with IO_READ in the backup case or IO_WRITE in the restore case, and in each call the plugin code must fill a buffer variable until the end of data is reached.
And here I have mentioned some examples that are a good point to study if you want to start using Python plugins. Another existing plugin written in Python is the MySQL plugin. The common way of backing up a database is to run a pre-script which runs mysqldump or pg_dump to the local file system and then use a normal backup to back up the dump file. But if it's a larger database, we probably don't want to have the dump file on the local file system, but rather send the data stream directly to the backup server. This plugin also has the advantage that it automatically determines the list of databases on the database server and backs up each database separately. It has some options that I won't explain in detail here. And there is also another very interesting plugin written in Python, which makes use of Percona XtraBackup; it enables incremental backups of MySQL databases and also point-in-time recovery.

If you want to get started with development of Python plugins, it's a good idea to set up a VM for development. We have instructions on how to set up Bareos in the documentation. In SUSE Studio we also created a pre-configured appliance. I like Vagrant and KVM, so I have created a small Vagrantfile, which you can find here, to easily get an installation up and running. And here's how you would do it manually; I don't think we need to look at that in detail. There's also an example of how to run a backup.

And in the end, if we have some time left: we already have the VMware plugin, which is written in Python and was mentioned in another talk already. We will soon start working on a plugin which makes use of the backup and restore API in oVirt. We have had a very excellent talk about incremental backups in QEMU, so this is also something that could be interesting to implement using a Python plugin. Some other ideas are mentioned here.
Yeah, for example, other SQL databases, or NoSQL databases like MongoDB, if they are very large. I know there is mongodump, which may be suitable for smaller MongoDB databases, but for the large ones obviously not. What is there in Docker to back up? I don't know, maybe you have an idea. We have been thinking for a long time about KVM stuff, but I think the most promising, because everybody wants incremental backups, is the recently appearing new stuff in QEMU.

So, any questions, ideas, proposals? Yes? [Audience: Is there a plugin for XenServer backup?] XenServer, no, there's currently no plugin. But we are also using XenServer, and I think it's possible to get at least full backups out of XenServer via HTTP, so I think we do this with the bpipe plugin; it's already possible with the bpipe plugin. But it could also be a good idea to implement that in Python, because Python could use the Xen API to get a list of all the virtual machines to back up and do the rest in an automated way, for example. Oh, time is up. OK, I'm around here, so ask me later if you want. Thanks.