...myself as well. From here on we are open for any questions or concerns. If you have a question, the person it is addressed to will repeat the question into the mic and then answer it. OK, it's open: any questions? Who of you actively uses one of the software pieces described today, Relax-and-Recover or any of the others? As an end user, I mean, not as a developer. And you have no questions? That's good news, because it means it works. If you are satisfied so far... he's not a user, should we allow the question? OK.

So the question was: the Bareos team has several developers, and one developer, Marco van Wieringen, who has made a lot of contributions, has decided not to continue with the project. Will that slow down development? Yes, definitely, because he has been a good developer. But we continue; most of the features in Bareos 16.2 were implemented without Marco, I would say, so the impact has not been dramatic. It's not nice that he didn't continue, and I would have preferred it another way, but the Bareos project will of course continue. It is open source, and we also get contributions from other sides.

Yes, the question was about S3 and RADOS. The status: the RADOS interface is available, but as far as I know it is slow, and as far as I know we haven't received much feedback on it, so it is not in active focus currently. We hope that will change and that some people will participate or add support for it. It's there, it does work for some, but the performance should be improved. There was also some experimental work done using libdroplet, which is a library for accessing S3 among other things, but libdroplet is not advancing very much, and we did not receive a lot of feedback regarding it either, so it may be continued in the
future, but currently we don't know. Also, I think Amazon has published a library for accessing S3 from C++; maybe that would be a better choice to use than droplet. One other thing about S3: a few months ago I read a blog article by someone from Norway who has implemented some S3 back-end code, which sounds quite promising, but I cannot tell you more about it because he hasn't contributed it directly to the project yet.

But wasn't the idea backing up from S3, not backing up to S3? Might be. There are always two ways. We can back up the data that you have in S3, yes; that's easier than the other way around, because using S3 as a storage back-end is much more difficult.

More questions? Yes, there's a question. Another question regarding plugins: there is the contributed Postgres plugin, and it's a bit of work. You can get it running, but it's always difficult to get the Bareos sources onto the server to build the plugin. Are there plans to ship it as a package?

The question was about the contributed plugin for Postgres backup, which I think should allow incremental backup and point-in-time recovery, but which is hard to get up and running. Yes, we have looked into that. There is a project called Barman which is made for exactly that, but it's a different approach. We would rather prefer an approach like the one we have now with the Percona XtraBackup plugin, something that does a similar thing for Postgres. So that may come in the future, yes. What I have heard about that Postgres plugin is that it is really hard to maintain, which is also the reason we haven't spent too much effort on it beyond getting it roughly working. We have a concept for implementing similar functionality in Python, but so far the time to do it has been missing. The next question I have to repeat, because I'm not sure I understood it.
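For context, the mechanism such a Postgres plugin (or Barman) builds on is standard PostgreSQL continuous WAL archiving. A minimal sketch of the server-side configuration, with illustrative paths that are not from the talk:

```conf
# postgresql.conf -- continuous archiving, the basis for point-in-time recovery
wal_level = replica        # record enough WAL for archiving and PITR
archive_mode = on
# copy each finished WAL segment away, refusing to overwrite existing files
archive_command = 'test ! -f /backup/wal/%f && cp %p /backup/wal/%f'
```

A periodic base backup (for example with pg_basebackup) plus these archived WAL segments is what makes restoring to an arbitrary point in time possible; the contributed plugin and Barman are, in essence, different managers wrapped around this same mechanism.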
So the question was: you have a Barman environment for Postgres, and you are thinking about integrating it with Bareos. Yes, I was actually planning to completely exclude everything related to Postgres and run the two products side by side. I don't see issues with that; these two products simply have different targets to achieve, although in parts they are quite redundant. So it's not a perfect solution to have them combined or running in parallel, but at the moment it's probably the best you can do. And if you collect experience with it, we would be glad if you shared it on the mailing list or gave other people advice.

OK, I also have a question for the guys next to me: do you have a roadmap for the near future? Yes, we have some ideas to implement. One of them is to extend the REST API in order to drive the multiple backup methods that Johannes provided in the last release. We think it could be a good approach for backing up Linux containers inside a Linux host: with a different configuration file per container, you could back up each container at once, the data in the container and also the configuration files of the container. We also want to extend the HTTP API, and extend ReaR to support HTTP URL methods, because in the near future we will have machines with UEFI 2.5 and HTTP boot, and it could be good to perform both backup and boot over HTTP, so backups and recoveries go over one network. And we have different things we want to solve regarding incremental backups with Relax-and-Recover: we want to make changes so that Relax-and-Recover can select among the backups stored on DRLM, in order to restore a point in time from the incremental backups, or choose the DR image you want to recover from in the Relax-and-Recover recovery menu.
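The per-container idea mentioned above maps naturally onto ReaR's additional configuration files, one per backup. As a rough sketch, with the container name, paths, and NFS URL invented for illustration (the variable names follow ReaR's NETFS method; check the current ReaR documentation before relying on them):

```conf
# /etc/rear/container1.conf -- one backup definition per container (illustrative)
BACKUP=NETFS
BACKUP_URL=nfs://drlm.example.org/backups/container1
# restrict this backup to the container's rootfs and its config file
BACKUP_ONLY_INCLUDE="yes"
BACKUP_PROG_INCLUDE=( '/var/lib/machines/container1' '/etc/systemd/nspawn/container1.nspawn' )
```

Each container would then be backed up with something like `rear -C container1 mkbackuponly`, the same mechanism that underlies ReaR 2.x multiple backups.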
These are some of the things, maybe not in the next release but in future or near releases, that we want to get done in order to improve the management of disaster recovery with DRLM and Relax-and-Recover. I don't know what you think, because we are the... Of course, of course. That's the nice thing about open source: we work together and we learn from each other, and that's the main reason. We don't do it only for pleasure, it's part of business, but it's growing, there is a community, and there are nice guys who do it for free. Vladimir is one of them, and he has done a very good job over the last couple of months, up to a year. At least he did some very nice things with Windows. I hate it, but he... but he has done it. Those mixed feelings: I hate it, it does not deserve to be restored, but you are restoring it somehow. But I'm glad you did it; thank you for that. It's not actually about Windows, though. Vladimir can explain it better than me. But if there are not too many big questions about Relax-and-Recover, I would like to take the opportunity, before Vladimir explains the block clone backup method, to tell you what I did. Before ReaR version 2 you could only have one single backup in ReaR: ReaR made the backup, and ReaR could restore the backup. Now you can have multiple backups.
What I implemented is not the final solution, it's a first step, but I think it could become quite important. As I mentioned in my last talk, though perhaps not everybody was there: when you do only one backup and only one restore, you have a kind of bottleneck. If you have, say, one terabyte to back up and restore, and only one process does it, it can take a long time. But if you can split your data into several separate parts, then you can back up and restore most of them simultaneously. I did tests on virtual machines, and even with a single CPU and only one gigabyte of main memory I was 10% faster with two tar programs running in parallel, because of course while one is waiting for the disks, the other can pack or unpack something. On a bigger virtual machine with four CPUs, the task manager, or whatever it's called, the thing that shows CPU usage, showed 40% for a single backup, and when I restored two in parallel I got 80% of course. So I could restore three backups in parallel, and I think nothing would have had to wait, provided the input is fast enough; on a virtual machine basically everything happens in memory, so I can rule that out. So it can get really fast, and this is the main point for ReaR: if the system is bigger and you can restore multiple backups, you can be as fast as the hardware allows, fully utilizing your hardware, and there is no built-in limitation in ReaR. You only need to keep things separated and then do it all in parallel. And this is a precondition for what Vladimir implemented: doing backups of what we call alien file systems, whatever is not native for the Linux system to recreate. So if you have several partitions, one with NTFS, one with some fancy FS, I don't know, and you want to back up both of them, you must do it as multiple backups, and you can hopefully even do it in parallel. You said it already well enough: exactly, it's not about backing up Windows, it's about backing up
all the other stuff: a plain device file, whatever you have in mind. It does not even have to contain a file system. I think everybody here knows dd: it basically does not care, it works on the block level, so it doesn't need to know the file system. So ReaR, basically via dd, can now do backups of whatever you have, whatever file system you have in mind. And ntfsclone was chosen for NTFS partitions because, as you might know, with all these large partitions dd does not know where the end of the data is; it just runs through the whole partition, so your backups can get really huge. ntfsclone addresses this very nicely, as it knows where the end of the data on the partition is. When I did testing with an unnamed operating system, a basic installation came to about 9 gigabytes, which is quite acceptable. So now ReaR can literally be used to back up everything; if you ask in the future whether ReaR supports something, you will get the answer that yes, at least using the block clone backup method you can get it into a file, and it works pretty well out of the box. The situation is quite similar to Clonezilla, if you compare: Clonezilla is basically a live system, so if you don't have a dual boot you have to shut the machine down and do all that, whereas with a dual boot you have Linux already running, so why not use ReaR for it? I'm not saying that it's essential, or that only ReaR can do this kind of backup easily and fast; you can do it by yourself, it's no prerequisite, it's just something that is nice to have.

I have to say, you are all fans of ReaR, for good reason, and I'm also a fan of ReaR, but you have all chosen to store your backup on NFS or somewhere else, using tar over NFS or something similar. What has been your reason not to choose an open-source backup solution, for example
Bareos, to do the backup? Well, as I said in the talk, DRLM and Relax-and-Recover are not data-backup replacements. It's another thing: to have a ready-to-recover backup on disk while keeping the drives and tapes of your backup system for data backups. After storing the DR backup on NFS or other storage, you can still send it to tape with Bareos or other backup products. If you have lots of data backups, of databases, applications, etc., you want to keep the drives free for those, and when your drives are available, at times when your data backups are done, you can send the DR images to tape easily, without impacting the backup windows of your data backups. I think this is the approach: it's not a backup replacement, it never will be a backup replacement. It's an option to centrally manage your disaster-recovery infrastructure in an easy manner, while also using your standard backup solutions to get the data onto tape if you need it, and to recover from there onto the correct system.

I can give a very small example of why we cannot always use Bareos or some other open-source backup solution. You probably know that in medical environments, in hospitals, there are a lot of scanners, very expensive scanners. I think 80% of the scanners in Europe are recovered with ReaR if they crash. The scanners today are Linux systems, and they all have disaster recovery on DVD: the complete scanner software and operating system is delivered on DVD to do the disaster recovery if it's required, and that is based on ReaR. That's one of the reasons: it's very compact and it's very fast. And the other reason why we did not choose an open-source backup tool as the first choice is that there are so many options. You have so many of them that it was very difficult to choose
one of them. And tar has always been the oldest one, I think, except for dd maybe. It's a very well-known tool, and importantly it's on all the different Linux distributions. I think in the beginning there were some deviations between tar versions, but now they're all equal and we don't have any issues with tar anymore. So it's the best choice for getting something working very quickly; that's why we chose tar as the internal backup solution. But there are many more; so many commercial and open-source solutions are possible.

For me it's a bit different. Call it not quite laziness, but limited time: I must first and foremost care about what we have in our SUSE Linux Enterprise product, and there is no Bareos in there. If Bareos were in there, it would have a better chance, but even then it would be somewhat optional, whereas tar is always there. So it's not that I like tar more than something else; it's just the native, simplest thing that's there. And when I implement something, I'm not that interested in which backup tool is running; my personal interest in ReaR is more in the framework and infrastructure. The first thing I do is implement something for the simplest case, in this case tar and a network file system, and when that is done I can wait for users and customers who say: oh, I want to have this for Bareos. Then I say: no problem, talk to the Bareos guys, the infrastructure is there now, and we can both try to get there, for example, if it's of interest to you, so that Bareos can also do multiple backups via ReaR.

But you told us before that you use ReaR only with the simplest method; it's not something you will implement for Bareos yourself? I only wanted to explain why I'm not using Bareos: I just use the simplest thing that is there. The reason why I ask is that in many environments it's more difficult to get an NFS server integrated than to have another service that communicates with the clients. But OK, in a test
environment in a Linux company it might be another way, and that's OK. There is no well-founded reason; it's just what happened.

A question for the Bareos guys. OK, the question was about the roadmap. PAM integration is on the target list for 17.2, and NDMP improvements: NAS storage systems are already supported, but we will support additional features like tape libraries directly connected to your storage system. Also database optimizations, which improve performance by as much as 100 times in some examples. The short explanation: the database structure we use in Bareos is normalized. For each job it records which files have been backed up, and to normalize that we have one table called File and another table called Filename, to minimize space. It's a normal database normalization, because a lot of file names will be the same no matter which client backed them up. But if we are talking about backup jobs which back up, let's say, 500 million files, and we have such a customer request, then it starts getting performance problems, because you have to insert 500 million records into the File table, join that with the Filename table, and check which records already exist in the Filename table, and that costs a lot of time. So the approach, and I have done some benchmarking on this, is to denormalize the Filename table. For sure we need more storage space in the database, but the problem is also that as the database grows it does not scale well: it takes longer and longer the bigger the database gets. With the denormalized approach, where we store the file name directly in the File table, which then for sure has repeated records of the same file name, the insertion of the records is a lot faster, and it scales much better no matter how big the database is. That's the change.

What we are currently busy with: we have some large customers who are paying us to improve the product, and
then it's often for large environments, and we fix the bottlenecks they have found. But we also have other things, like the Python work we are doing, and ongoing enhancements in the web UI, which is also related to what is in Bareos itself, because the web UI depends on the JSON data it gets back from the director, so that's a lot of work too. What we have also seen is that a lot of people start with disk-based backup, and the way disk-based backup is handled in Bareos comes from tapes; you notice this in a lot of places. Because it's quite confusing and needs some careful configuration in the beginning, we are trying to improve this for the next major version, to make it easier for new users to get used to Bareos and more efficient for disk-based backup. So that's about the roadmap. More questions?

What do we mean by capabilities, the Linux context of files? ACLs, access control lists? And cgroups? No, no, it's not that; I have seen something related to Btrfs. Capabilities in copy-on-write, maybe? Maybe we should repeat the question first. Well, I will repeat the question first, because otherwise it will not be recorded. The question was that with ReaR there are some tests showing that file capabilities were, with tar at least, not taken into account in the recovery, and the questioner had problems with the recovery of the system. It's a very good question, and something to take with us for the future roadmap, I think, because that is indeed not yet implemented, or not completely. By default we disable it, but it can be left enabled and handled that way. Tar is indeed a bit old, that's true, and tar does not take new features into account. There were other backup solutions like star, but I don't think star is keeping up with the new stuff all that well either. I would suggest that you create an issue for it, and at least, if you say that it works with star, tell
us about it: tell us how you fixed it and what the problem is, so that we understand it very well. It's a new request, a feature request; just make an issue for it. That's the purpose of this session: gathering new features and requests. Any more?

I just want to mention an issue also with the capabilities: when rsyncing a system to another system, rsync can handle this, but you need an extra parameter that I had never seen before, so maybe it's the same with tar. Right, so that's a bit of advice for us as well.

The next question was: what do we do about deduplication within ReaR? Nothing. At least I don't, but I will give the mic to him; we don't do deduplication... oh yes, I forgot, I implemented support for Borg backup. Let me say something in between: I often notice that when there are questions about ReaR, they are not about ReaR but about the various backup tools and their limitations or issues. If tar is unable to restore whatever it's called, then it's not a bug in ReaR. It would be an issue in ReaR if, as he told, you need an option in tar and ReaR just does not set this option. For example, if a hypothetical new tar, version 99, supported some "--capabilities" option that did everything right, and ReaR did not set it, then that's an issue in ReaR, because ReaR calls tar, and if it can call it right, we can implement it in ReaR. But if tar cannot do it, because the feature is just not there, then we cannot implement it in ReaR. You must keep those things separated.

OK, so to answer the question: there is a possibility to do deduplication. There is a project called Borg backup, completely written in Python, maybe you've heard of it, and Borg is integrated into ReaR as one of the backup methods. So if you would like to back up virtual machines, for instance, there is the possibility to use Borg backup, which is excellent at deduplication and can save a lot of space, at least from what I've tested.

On the other side, I have to say we have only limited support for deduplication, via a feature called base jobs: if you have a set of machines which are similar, you can define a base job, and the files backed up there are not backed up again for the other machines, only if there is a change to them. This is something you have to handle separately, and yes, there are plans to do something about it, but I'm not sure when we will get to it. There is also a project, an external contribution, to optimize our volume format so that it is better deduplicatable by storage systems: splitting the volumes and keeping the real data content separate from the metadata and things like that. So there is some work, but I don't see a release of this soon, unfortunately. And there has also been a contribution, for the storage daemon, which uses SQLite somehow to store the backup data and does some deduplication. The issue is that Bareos handles volume files on disk in the same manner as it handles tape: it just writes the whole backup stream to the volume file on disk, and that's not deduplication-friendly if you use block-based deduplication. This plugin, a Python plugin for the SD as far as I know, splits the data and makes chunks of it, so it can do some deduplication with that. It's not integrated with Bareos yet, and as it's using SQLite for this purpose, maybe it will not get integrated in this form, but still, we are always happy to see people enhancing our product.

From the DRLM point of view, our main concern is to have a DR image that can be easily moved between DRLM servers and between sites, and with deduplication on one target site, I think you must send the whole backup of all machines to the destination site, and that is not the approach for us as the central manager of a
Relax-and-Recover environment. The point is to make it easy to migrate machines, recover machines, back them up, and also to handle hardware changes: moving physical machines to the data center because your hardware is out of maintenance and you need to install again on new hardware of the same architecture, or moving virtual machines between different data centers. Moving the whole amount of data in a deduplication scenario could be a big deal, so it is hard to manage in this picture from the DRLM point of view. But anyway, I think there are some file systems that support deduplication at the file-system level, ZFS I think has it, and maybe it could be addressed by formatting the file system on the DRLM side accordingly. But that's not in our approach of being able to move data between DRLM servers easily. That's just the view from our side. Any more questions?

Cool. OK, this was it for today, for this devroom on backup and disaster recovery. I would like to thank all the present speakers. My question is: is this repeatable next year? I think we already had people who would have liked to come this year but couldn't for some reason, and who would like to come next year. So if there is enough interest, if you liked it, then we could try to have another devroom next year, maybe with a broader set of speakers: not only Bareos, not only ReaR and DRLM. At least we have one more already, so why not? I tried to get other speakers as well, but they didn't respond; there was one guy from Brazil who tried to come but couldn't get away from his university, which was a pity, but he promised to be here next year. We have already filed a request for next year. Thank you for being here; I would say have a nice evening, enjoy the evening, and maybe we'll see each other tomorrow, because we are all here again tomorrow for another fantastic day of
FOSDEM. Thank you very much, goodbye!