I guess it's 10:30 and the screen and projection are working, so let's start. Thank you, Mike. Hello everybody, good morning. Thanks for coming to this early first talk of the devroom. It's the second year of the Backup and Recovery devroom here at FOSDEM, so I'm very pleased to be here again for a second time. Today I'm going to talk about the past, the present and the future of the DRLM project.

My name is Didac Oliva. I'm co-owner and co-founder of Brain Updaters, a small consultancy company that builds solutions on top of open source projects. I'm also a co-founder and maintainer of the DRLM project, which is what this talk is about. I've contributed a bit to Relax-and-Recover, which Gratien is going to talk about, and Johan after me, and a few patches to the config2html project. These are the original DRLM team; we have new people in the project now, like Néfix, and a few contributions from Johan, who is in the room today as well.

Let's start with the agenda of the talk. We're going to talk a little bit about the past: what's the origin of the project? Very fast, don't worry, it's too early today. Then the present: what is DRLM, and what is in the latest release? And then the future of the project.

So, let's start with the past. Well, the past starts with a requirement from one of our biggest customers, who needed a proper disaster recovery solution for Linux systems. It's a pharma company, with a lot of compliance and validation requirements for disaster recovery of their systems. They tried and tested a lot of disaster recovery tools, and every attempt failed. So they asked us, and we proposed to use Relax-and-Recover and to develop a management layer on top of it for them, because they needed a central management solution for their data centers. This started in the summer of 2013, and at the end of that year we had the first version of DRLM in real use for them. That was the very beginning.

I'm not going to talk about what the real use of the project is for them today, because we don't have time, but you can check in these slides what Grifols' global manager for Unix systems says about the project and what it means for the company. And here is a small timetable from the beginning to today's latest version.

So, let's talk about the present of the project. First of all, I hope everybody knows more or less what Relax-and-Recover is, since we're going to hear about it later; DRLM is just a layer on top of it for management, central management.

Let's look at some of the features that DRLM adds on top of Relax-and-Recover. First, automatic error reporting: if any backup fails, DRLM can open a ticket or send a report to your monitoring system, and it can be extended to work with any monitoring system. It can perform an unattended installation of Relax-and-Recover, all its dependencies and all the base configuration from the DRLM server, so you just say "add client, install client" and it does all the boring stuff for you. And it has a backup scheduler: you can schedule tasks and it will perform the backups weekly, monthly, daily, hourly, whatever you want.
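To give a feel for what that scheduling amounts to, here is a minimal sketch in Go of a cron-style backup scheduler. This is not DRLM's actual code: the robfig/cron library and the local `rear mkbackup` invocation are assumptions for illustration (DRLM itself drives ReaR on the clients and reports failures to your monitoring system).

```go
// Hypothetical sketch of a cron-style backup scheduler, NOT DRLM's real
// implementation. The schedule spec and the rear invocation are placeholders.
package main

import (
	"log"
	"os/exec"

	"github.com/robfig/cron/v3"
)

func main() {
	c := cron.New()

	// Schedule a weekly backup; DRLM lets you pick hourly, daily,
	// weekly, monthly, and so on per client.
	_, err := c.AddFunc("@weekly", func() {
		// DRLM would run Relax-and-Recover on the client; here we
		// simply run it locally for illustration.
		out, err := exec.Command("rear", "mkbackup").CombinedOutput()
		if err != nil {
			// On failure, DRLM can open a ticket or notify your
			// monitoring system; this sketch only logs.
			log.Printf("backup failed: %v\n%s", err, out)
			return
		}
		log.Printf("backup finished:\n%s", out)
	})
	if err != nil {
		log.Fatal(err)
	}

	c.Start()
	select {} // block forever; the scheduler runs in the background
}
```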
And there is another important feature: you are going to be able to export and import recovery images. You can export and import on the same server, but you can also export an image from a DRLM server in data center A and import it into data center B, or wherever, and recover that system there.

So, what is DRLM? Mainly, DRLM is a disaster recovery solution: you are going to be able to recover a system fully from the network, whether it's a virtual machine or physical hardware. Its main task is disaster recovery, but the years passed, and we are not just the developers of the project, we are also users of it, and we found it's a very interesting tool for deploying new systems. We can deploy new systems from a base image of a previously installed system, what is called a template or golden image; we can keep different images of our base configurations and base systems, and when adding a new client we can import one of them and deploy that operating system. So it's a good tool for that too.

It's also a very interesting tool for performing migrations. Why? Because it's a vendor-agnostic solution, so you can migrate between different virtualization systems, from VMware to Xen to KVM, whatever. You don't need tools tied to the vendor; there are tools to do that, but they can often fail depending on the source and the destination of the system, so this is a good tool for it. You can also swap old hardware for new hardware very fast. And if you want to roll back from a virtualization platform that isn't working properly and go back to physical, it's easy to do with DRLM: you just boot the system from the network again and recover it. It's very fast.

You can say: well, there are other tools for these migrations, there are other tools for operating system deployment, good tools. Why use DRLM for that? Well, does anybody in the room test their backup systems? Do you test it every year, every six months? Do you test the recovery?
More or less? I'm not your boss, so you don't need to lie to me. Well, with DRLM, when you are migrating systems or installing new systems in your company, you are testing your disaster recovery solution. While you are using it daily for these kinds of tasks, you can be sure that, in case of disaster (which hopefully never happens), your disaster recovery solution works, because you use it daily with your systems. New hardware being installed for the first time may give you a few problems, but you solve them at installation time; no worry there. The worry is when you need to recover a system in case of disaster, because you need to be online again. So if you use DRLM daily for these migrations and new system deployments, you can be sure that, on the day it matters, on hardware where you have already installed systems a few times with DRLM, you'll be able to recover fast. That's the point of using DRLM for that.

Talking about the present, let's see a few highlights of the new version, 2.3.1. We replaced the API, which was backed by an Apache server, with a small Go binary that does the same thing. Apache was a lot of pain when releasing new versions for each distribution; now it does just what it needs to do, and it works flawlessly, no problems there. We improved the command-line interface with more information: now you can see whether a client is online or offline just by listing the DRLM clients you have in the database. We added support for new distributions, like Debian 10. And when you list backups, the output now shows the duration of the backup and also the backup size. That's important, because a very small backup, or a very short backup time, may be a sign that something in the configuration is wrong; it's a fast way to see whether something is off with your backup. Maybe there is no error, but the configuration is only keeping a few small files and not the whole system, so it's good to have this information on the command line.

More or less, these are the outputs I'm talking about. It now shows the ReaR version and the client version, whether backups are scheduled for the client, and green and red colors saying whether the client is online or not; if it's not online, you are not going to be able to do backups, and so on. In the backup size you can see if a backup is small, so maybe check whether it's okay or not; if it's a small server, it could be fine. These thresholds can be adjusted in the configuration, so if you know that your backups are small, you can adjust that and it won't warn you; it's on by default. And you see the duration of the backups. That's what we have today in the DRLM command-line interface, plus a few other things we have been improving over the last six months.

We also had a contribution from someone who tested the project in his company and sent a pull request to run DRLM in a Docker container. It's only on the develop branch yet; it works, but we don't recommend it for production because it needs host networking and some other things that are not really container-friendly. But if you are a consultant with a laptop and you need to deploy lots of systems, it's good to have a Docker image with DRLM.
You just start it, do whatever you need to do, and stop it, so your laptop stays clean and you have your installation with your golden images and so on ready to use. It's a good use case.

I have a demo, but we don't have time today; the demos of how DRLM version 2 works are published on asciinema. All the links for today's demos are on the event page, so check them, or we can talk about it later.

So now let's talk about the future of the project: version 3. What's going on? Well, the first time around we needed to release the first version of DRLM very fast, and we used the Relax-and-Recover framework for that. It's a pretty amazing framework, but for our use it has some limitations, so we thought we needed to go further. What's coming, and what do we want for the future of the project? We are doing a complete rewrite in Go. Why Go? Because it's multi-architecture, so we can have binaries for different systems, not just Linux. The new version has a modular design: the storage backend is going to be S3-compatible (MinIO), and the database moves from SQLite to MariaDB. It's going to have a plugin system, with the agent managing the plugins. The API communicates over gRPC. The client runs on your laptop or whatever workstation you have and connects to the core on your network, and you'll be able to manage different cores. So it's going to be more flexible and more powerful.

And, we don't know yet, but maybe it won't do just disaster recovery for Linux; it could do more, because you could run the agents on Windows, macOS, AIX, HP-UX, whatever you want: we can start this Go binary there, and if we have a plugin for it, good. We don't know; we are going to release version 3 first with Relax-and-Recover as it is now, that's our target, but we know the design will be ready for the future. It will also be deployable on Kubernetes, and we have in mind that this architecture should work for backing up Kubernetes systems and Kubernetes databases. That's also a goal of the new version.

So what do we have until now? We have a functional drlmctl command, which is going to be the command-line interface of the project. We are able to deploy DRLM cores on different systems from the command line. We are able to deploy agents on the network that communicate with the core, and so on. And we have a base plugin system that is working and performs a small backup, storing it on the storage backend. That's what we have today, plus a testing environment to test all of this and a development environment.

What is important for us is that we are developing all the communication on top of TLS: all the components communicate over encrypted channels. This is the most important decision we made: if we start developing with encryption and we have problems, we solve them the first time, instead of "it was working without encryption and now it's failing, what happened?". When it's released, it will be released with TLS. I think we even dropped the option in the development environment to run without TLS; it's TLS by default.
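As a rough illustration of that "TLS by default, no fallback" idea, here is a minimal Go sketch of a gRPC server that refuses to start without credentials. It is not DRLM's actual code; the port, certificate paths and service registration are placeholder assumptions.

```go
// Minimal sketch of a gRPC server with mandatory TLS, mirroring the
// "encrypted from day one" approach. Not DRLM's real code; paths and
// port are placeholders.
package main

import (
	"log"
	"net"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
)

func main() {
	// Load the server certificate. There is deliberately no
	// insecure fallback: if the credentials are missing, we fail.
	creds, err := credentials.NewServerTLSFromFile("certs/core.crt", "certs/core.key")
	if err != nil {
		log.Fatalf("cannot load TLS credentials: %v", err)
	}

	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		log.Fatal(err)
	}

	srv := grpc.NewServer(grpc.Creds(creds))
	// ...register the core's gRPC services on srv here...

	log.Println("core listening with TLS on :50051")
	if err := srv.Serve(lis); err != nil {
		log.Fatal(err)
	}
}
```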
Let me explain a little bit. DR3D is the development, testing and building tool of DRLM. It's a GitHub repo with a few Docker images and Compose configurations that can deploy the complete base environment of DRLM. Just by cloning it, building all the Docker containers and starting them with the configuration of your GitHub account, with your previously forked repos of DRLM core, common, plugins and so on, you get a complete development environment on top of Docker. The development box is also a container, with the Vim and VS Code editors in place with all the plugins you need to develop on the project, so you don't need to install Go, libraries, packages, anything on your laptop or workstation. It's all pinned in the repo but built cleanly on your laptop. So everybody contributes to the project with the same libraries, the same versions; everything is the same. If you have a problem, we have a problem: everybody is working on top of the same platform, the same environment. This is good because every pull request is tested in the same environment, same versions, same libraries. The point is that it's going to be easier to contribute to the new version than to version 2 of DRLM. That's the lesson from our earlier experience, and why we offer this today to the people who are going to use, test or develop on top of it.

We also have some demos of this development environment and of testing what version 3 has now; they are posted on the event page. We don't have time to show them today, but you can check them and see the whole testing flow, how DRLM works now, and how to contribute and test it with the development environment and repo. They're there for you to check.

So, the future of the project. We are going to keep developing the two versions, version 2 and version 3. Version 2 is going to have changes, but not the big changes that are coming in version 3; maybe some things we develop for version 3 that are compatible with version 2 will be ported back. And we are not declaring an end of life for version 2 until version 3 is really production-ready. Maybe for years this version 2 of DRLM will remain useful for some situations. But we are looking at the future: we are going to put all our effort into developing the new version while maintaining version 2, testing and developing whatever is needed for new releases of the Linux distributions and so on. Most of the effort will go into version 3, but we may have some news on version 2 if some sponsored development comes this year; it depends on customers needing things that are not on the roadmap whether we develop more on version 2 or not. All the new and really interesting stuff is going to come in the new version.

So, hopefully you'll use it, test it, or want to contribute to the project. It's easy to do: you can come to our GitHub repos, open issues, test it and report whatever problems you have, and discuss new ideas for the project. We always appreciate that, and we are open to new proposals. And if you use it, or you are going to use it, and you enjoy it, please share your experience, so more people can discover and use it.
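If you are wondering where a contribution could plug in: I mentioned that the agents manage plugins that produce backups for the storage backend. As a purely hypothetical sketch (these names are my assumptions for illustration, not the project's real API), an agent-side plugin contract in Go might look like this:

```go
// Hypothetical sketch of an agent-side plugin contract. These names are
// illustrative assumptions, not DRLM v3's actual API.
package plugin

import (
	"context"
	"io"
)

// Backuper is what an agent could call for each scheduled job.
type Backuper interface {
	// Name identifies the plugin, e.g. "tar" or "mariadb".
	Name() string
	// Backup streams the backup payload; the agent would ship it to
	// the S3-compatible storage backend (MinIO) behind the core.
	Backup(ctx context.Context) (io.ReadCloser, error)
	// Restore consumes a previously stored payload.
	Restore(ctx context.Context, r io.Reader) error
}
```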
So, it's time for Q&A; we have a little time for that. And if you missed something because I went through the slides very fast, we have, I think, eight more minutes or so to talk about the project. Here are the slides and all the information about today's event, and here is the experience of a real customer in production, a big customer using DRLM; it's what they said about the project two years ago. No, it's not me, so you have to trust me. I'm glad to answer anything.

Well, the question was whether we are going to use PostgreSQL as the database for DRLM. We don't know. For now the target is MariaDB, instead of the SQLite we had in version 2, to be able to have high availability for that part of the backend. We have a connector to a networked database now, and it could be done with PostgreSQL, or maybe whatever other database we think of, but for now we are focused on having a working version, so we are focused on the chosen stack: MariaDB, MinIO for the storage backend, and so on. But who knows, maybe in the end you'll be able to choose whatever you want: in one situation it's better to use PostgreSQL, in another it's better to use MariaDB. It's possible and could be done.

Yeah, the question is how the database is going to grow. Nowadays, for DRLM, we only have a very small database that just keeps the references to the backups, the images, and so on. The new one is going to be bigger, but not as big as you might expect; we expect it to stay small, because the data itself is compressed and encrypted on the storage backend. So we don't think a huge database is going to be a problem, given that we have a very small one now: it only keeps the client references, the backup references, the jobs we have scheduled. It's a really, really small database, so we don't mind that for now.

Any other question? Well, the question was what the link with Kubernetes is, since I mentioned Kubernetes in relation to DRLM; nowadays DRLM is a service for managing disaster recovery on Linux. Well, in recent years I started to use Kubernetes, the new way of containers with Docker and so on, and at every conference I attended, at every talk, when people asked about database backups, the answer was: OK, keep the database outside of Kubernetes. Why?
Because we don't have a proper solution to recover the data inside. That's the real situation now: all the backup solutions for Kubernetes and containers, however they are written, are meant for snapshotting the underlying storage. That's good for static data, but with an open database it's going to be a mess; you are not going to be able to recover the database consistently. So the point with DRLM version 3 and Kubernetes is to have an agent that can run inside those pods, sharing the IP address and sharing the volumes, so we can run tasks on that agent, using a plugin for the database inside Kubernetes, and perform a proper backup on top of it, and a proper restore if you want, without having to redeploy the whole pod: just backing up and recovering your data, the way it was always done before containers. That's more or less the link. We thought we needed to change a little the concept of what DRLM was, because the future is going to be containers and orchestrators, and there is still no proper solution for backup and disaster recovery inside them. So we decided to look further into the future and start rethinking the whole thing now.

As I said, we started with version 1, and version 2 still uses the Relax-and-Recover framework, which was amazing for us, but we thought that in the future we'd have problems; we've seen that it's difficult to containerize DRLM because of the framework and the coupling with DHCP and so on. Now it's going to be a service, and there will be an agent system with server agents and client agents: a server agent will provide DHCP, TFTP, whatever, and it will also be deployable as a container, linked through the communication inside the DRLM agent system. So it's going to be easier to deploy and to extend on top of orchestration platforms and today's cloud services. That's the point, and more or less the link, and why we are rethinking all the DRLM stuff.

So, time's up. Thanks for coming, and hopefully you enjoy the next talks today in this devroom. Thank you very much.