Welcome to this afternoon session. My talk will be about Linux disaster recovery as a service. Welcome. My name is Gratien D'haese; I am one of the maintainers of the Relax-and-Recover program. But this talk will begin with a broad perspective, talking about disaster recovery, disaster recovery planning, and so on. Let's start with the agenda. So, the question sometimes is: do disasters really happen? We will soon know why. We will talk about planning, about the software that is available on the market, how to do it in practice, how to exercise it. Then we will discuss in particular the open source project Relax-and-Recover; it is one of the open source projects related to disaster recovery. And as a last topic I want to spend some time describing what disaster recovery as a service could mean, and how to build it up from the ground.

The question is: do disasters really happen? These are pictures of disasters that are less than one year old. Fire can happen. This year we had a lot of snowfall, and roofs collapsed; there is a picture from somewhere in Belgium. A water pipe can break; computer rooms can go under water, and so on. Earthquakes are maybe less of a concern in Belgium, but you never know, of course. But seriously, the most critical disaster that can happen is a burn-out of your computer: hardware that fails. You could protect it with a good fan. That is a silly picture, just to bring some joy into the talk.

To give you some basics: if we talk about disaster recovery, disaster recovery covers only a small piece of a complete disaster. Disaster recovery is about how to restore a function, for example a computer, to a ready state again, so that it works as before. In contrast, you also have business continuity. That is about the process, the process of the business: how will the business react when a disaster happens, until the business is restored to normal operation? So there is a completely different, broad spectrum around disaster recovery and restoration, in fact.
In this talk I will concentrate on the first part: disaster recovery of computers, data centers, and so on. Albeit the second part would make a very interesting talk too, that will be for another time, I believe.

Disaster recovery. The question is always: will it happen? Yes, of course. Most of the people here in the room have already had a disaster, more or less: a computer breaks, you lose your computer, you lose important data. These things happen. Unfortunately we are not always prepared. The important lesson, if you take one thing away from here: be prepared, and react timely to a disaster. And before you can react timely, you need a plan. That is very important. So don't hesitate: if you have a disaster, you have to know immediately what to do.

I often get the remark: "I have a good backup policy." Excellent; you have to have that, because it is very important for restoring data. But in case of a disaster, backups are not always enough. You can restore your data, of course, but if you lose your computer, you have to restore the computer itself: new hardware, for example. You have to reload your OS, you have to find the right drivers, reinstall your backup software. That takes a lot of time, and in case of a disaster you don't have that much time, because you can lose so much: days, sometimes weeks. A very long time ago I even had it that months later issues were still coming up that stemmed from a disaster, because after months nobody knew anymore how things had been configured. So it is very important to foresee these kinds of things, and it helps to have a kind of inventory of your hardware and software; it is very important to have that. There is software available for it; I will speak about that a bit later.

So, the plan: the disaster recovery plan. I stress the word plan; it is very important. Before a disaster, so that you can act on a disaster, you need to plan everything. You have to find the money, you have to convince your managers of a decent plan, you have to exercise the plan.
So you need to have extra hardware available, or you hire it; that depends on the amount of budget you get from the managers. And exercising guarantees that you know what to do, immediately. That is very important. And the last line, in very small print, you can see it: the fewer managers around you in case of a disaster, the better, because they always tend to influence you, to get in your way. It is very important: decision makers, keep them out, and a decent plan is the way to do that.

Very important in a disaster recovery plan is the risk analysis. Before you can act on a plan, you have to know what can happen, and which factors in your computer room are important. Because not all the operating systems, not all the data, not all the computers and sets of functions of computers are equally important. So you have to make some categories: which computers should be restored immediately? For example, a database server is most of the time one of the important things to restore immediately when a disaster happens.

If you exercise a disaster, it is very important that you develop a plan, of course; you have to time it, you have to write it down, you have to make an inventory, and for that you have to walk around in your computer room; you have to know the labeling; the operators have to be trained. So there are a lot of checklists to be made, and a lot of meetings to be held, with a lot of people. But the important point is: you have to convince your managers, your directors, your board of directors, that you need a budget, a large budget hopefully, to give you the time to exercise everything, because it takes a lot of time to make a decent inventory and to test it, stuff like that. And very important is also the last line here: keep the plan current. Because once you have made it, that is not enough: your computer room is continuously alive and changing. So there are new computers coming in, others going out.
And new things keep coming. So you have to redo the exercise, or at least the thinking about the plan, on at least a yearly basis. The same goes for the testing, very important: you don't have to do all the computers; take a sample and just test that.

Of course, it is always better to prevent a disaster than to repair the damage. Keep it simple. Simple things are: having good mirroring, a good backup, of course, and testing the restores of your backups; very important. And make an inventory. In the open source world there is a very good program available for that, cfg2html, which makes a kind of hardware inventory of your system, but also a software inventory, and keeps a listing of all your configuration files, in fact, so you can put it aside. And of course, when you are ready to start with all that, you have to find a good, decent program to do the bare-metal restores. There are quite a bunch available; I will give you an overview of what exists.

Now, do we go for commercial or open source? Yes, I know: here we are all convinced that open source is the way to go. We too. And luckily there are a lot of big companies, huge companies, that are realizing that open source is not bad. I remember 20 years ago it was a disaster to convince the managers that open source was better than closed source. But the mindset is changing, in a good way: open source does get a chance, also in disaster recovery environments. That is a good point. But anyway, whatever you choose, commercial or open source, you have to test it, because not every program fits your needs, or maybe you just don't like it. So the testing is very important. Never test on a production server, of course; always use a test server.

So what are the solutions available in the open source world? (The commercial world I won't talk about.) There are a few categories. In the open source world you have disaster recovery modules that are available in backup solutions. To name one: Bacula, an open source backup solution; a very good open source backup solution.
But I am not too fond of having a disaster recovery module that you get as an extra thing. The main focus there is backup, and you can only be really good at one thing: backup or disaster recovery. So you are very tightly tied to the backup product itself, and that is not always the best option. But OK, if you like it, fine.

Another possibility is cloning. Cloning is nice for building new systems, but it is not the best way to do disaster recovery. A snapshot is always old, and it does not give you the flexibility that you sometimes need on new, slightly different hardware. That is the problem with cloning systems: the new machine is not always 100% identical, and then it does not work. I have seen that happen a few times. It is fine for one purpose, but not for disaster recovery, in my opinion. But you don't have to take my word for it, of course: test it yourself.

And then you have the true open source disaster recovery software, whose main focus is, luckily, disaster recovery and not backup. You can do backups with it, but it is not really a backup solution; it is a disaster recovery solution that also makes backups. And the main focus is to give you a very fast environment; the user interface is not a very fancy one. That is not important: when you are in disaster recovery mode, you have only a laptop or a small terminal, sometimes only a prompt. So what use is a fancy user interface at that moment? You need the command line; you need to type something, very fast, and it has to act.

OK, in this category we have three open source disaster recovery programs on the market. We have Mondo Rescue, which dates from 2000. We have mkCDrec, "make CD-ROM recovery", and we have Relax-and-Recover. I can only really say something about mkCDrec, because I wrote it myself. So it is also my piece of software; it is the predecessor of Relax-and-Recover. It is a great program, but it is so monolithic.
It is not flexible enough to react to changes, and you need to change things in disaster recovery mode too. That is why we in fact rewrote mkCDrec into a new program, Relax-and-Recover. mkCDrec still exists, and its user base is still very large. I try to convince people to move to Relax-and-Recover, but sometimes that is hard, because they like it so much. And you have Mondo Rescue. That is also a good program, but I won't say much about it, because I have not used it.

OK: in disaster recovery, the medium of the data is very important. Never store your disaster recovery data on your local computer; always use external data storage. Very important. Whatever the bootable medium is, that is not so important: you can boot your new system via an ISO image, a CD or whatever, via the network with PXE booting, via USB, via a tape with One Button Disaster Recovery. Everything is possible, and you can mix the two things: you have a boot medium, and you have your data storage where your backups are stored. They can be on the same medium, or they can be different. You can have a LAN-only solution where you boot from a central boot server, a PXE server, and have your archives, your data storage, in fact, on a different system; perfectly possible. In a different environment, where you don't have a fancy NAS server, you can have it on a tape, an external tape drive, and boot from the tape. You can have it on a USB disk; these days USB can be a total solution: you can boot from the USB disk and have the data on it as well, and because it is USB you can remove it and store it in a vault somewhere. And if it is possible, you can do it over the network; on the firewall, for example, you don't need a lot of rules, you only need the secure shell or rsync rule, in fact, to get through your internal data center. It is all possible, but you have to configure it, of course. How does disaster recovery work in practice?
First of all, you have to gather the system information. So the program that you use will collect all your data: network information, boot information, your disk information; all kinds of information have to be gathered and stored in a central place. Also the disk layout, LVM, RAID, stuff like that, and whether you are using GRUB, LILO, or ELILO for Itanium-based systems. You have to make a system backup from that particular moment, and also of the user data, but not necessarily. And you have to make a bootable image, in fact, because in case of a disaster you have to boot from that image you created. That image can be stored on a CD, on the network, on tape, on USB, whatever. And all these steps are done online; that is a very nice thing about it. So on a production server you can just launch it via cron, or another scheduling system, daily, weekly, whatever, and you have it available online. That is the practice for making the rescue image.

Now, if you are in disaster recovery mode, you need that rescue image to boot from. So you boot your system and you have a toolbox. Here I am describing that the rescue image also needs the kernel and the device drivers, the network configuration, stuff like that; and it all runs in RAM, so you need that environment.

Now we come to the recovery phase. In the recovery phase you boot from the rescue image. You have to restore your disk layout: first of all you recreate your partitions, then the file systems as they were originally created. You mount them, and once they are mounted, you restore the data from your data store, whatever it was. When that is done, you restore the boot loaders, and in fact that is it. Once the boot loaders are there, you can do a manual inspection, or not, that depends, and then you just reboot your system, and it is finished.
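Because every step runs online, the scheduling really can be a plain cron entry. Here is a hypothetical example, assuming ReaR's `mkrescue` and `mkbackup` workflows and the usual `/usr/sbin/rear` installation path; the times and split between weekday and Sunday runs are purely illustrative:

```
# /etc/cron.d/rear -- illustrative schedule, not an official recommendation
# Refresh the rescue image every night; take a full backup plus image on Sunday.
30 1 * * 1-6  root  /usr/sbin/rear mkrescue
30 1 * * 0    root  /usr/sbin/rear mkbackup
```

The point is only that no downtime or operator interaction is needed: the image and backup are rebuilt unattended, so the rescue media never grow stale.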
Those are in fact the simple steps in disaster recovery: making an image, then restoring the image onto new hardware or a new virtual partition, it doesn't matter, and just booting it. And it should work.

Let's talk a little bit about Relax-and-Recover. Relax-and-Recover was created in 2006 together with a German colleague of mine, Schlomo Schapiro. He was a big fan of mkCDrec, and we decided together to rewrite it from scratch. He also had some open source projects of his own, and together we rewrote it; in fact, in two months' time or so we had the first release available. It was an incredible achievement, and that was because it was completely written in a modular way. But I will come back to that later.

Relax-and-Recover has come a long way from 2006, when we had no users, in fact, until now, when we have a very broad user base, going up to big companies, even in Belgium. I don't know if Jeroen and Dag are here; Dag is not here, he is sick. He's sick, OK. We can say that the federal police is going the open source way, and they use Relax-and-Recover for their disaster recovery. So, thank you, federal police. But not only the federal police is using it: a lot of German government agencies are using it, and also quite a lot of commercial companies, even very big ones, that use it for their global disaster recovery policy. And, importantly, it has been available since the beginning of last year in Fedora, so you can just do `yum install rear`. It also ships with SLES 11, SP2 and SP3; it is in the image over there. And it integrates; that is very important. You can very easily integrate external backup software into this system. There are already a few external backup programs linked into ReaR, so that your backup is in fact stored with, for example, Tivoli, Data Protector, or NetBackup. Bacula is also available.
So you don't have to care about ReaR for your backup; it is done by really good backup software that you trust, because companies want their own backup software, because the operators are trained for it. So it is a very good selling point that your disaster recovery system integrates very well with your existing backup software. But of course we are in an open source environment, so first of all we deliver GNU tar as the default backup solution. Rsync is another possibility, and since the new release, 1.9, we also have Bacula as an open source solution. And it scales very well; I already told you that.

OK, a bit of history, as just mentioned: it is in fact the spin-off of two open source projects, and the first release even came in three weeks' time, not one or two months. OK, what are the features? Relax-and-Recover, or ReaR as I will call it from now on, is focused on disaster recovery only. Backup is important but not the main focus. We are not really interested in doing, let's say, incremental backups; we try to do a complete full backup as the main integration. So it is in fact a simple full-backup integration. It complements the backup software, of course, because backup software is very good at incrementals and gives you long-term data storage, but backup software is not really intended to do disaster recovery. So the two pieces of software fit together very well, and that, in fact, is what we try to propose; our methodology says: use the best tool for the job. I already mentioned that external or commercial backup software is supported and included today. Just last week we had a new release, 1.9; we are very excited about it because a lot of bugs were fixed and a lot of new features were added. But I will talk about that a bit later. So the integration is very transparent, but some other backup solutions are still missing.
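As a sketch of what that integration looks like from the user's side: the choice of backup tool comes down to a `BACKUP=` setting in ReaR's configuration. The method names and URL schemes below follow ReaR's conventions, but the host name and device label are made up, and you should verify the exact spelling against the release notes of the version you run:

```
# /etc/rear/local.conf -- pick exactly one backup method (illustrative values)
BACKUP=NETFS                               # internal: full GNU tar archive...
BACKUP_URL=nfs://backupserver/export/rear  # ...sent to an NFS share (host name is made up)
# BACKUP_URL=usb:///dev/disk/by-label/REAR-000  # or to the same USB disk you boot from
# BACKUP=RSYNC   # internal alternative: rsync to a remote host
# BACKUP=BACULA  # open source integration: Bacula drives the restore (new in 1.9)
# BACKUP=TSM     # commercial integration: IBM Tivoli Storage Manager
# BACKUP=DP      # commercial integration: HP Data Protector
# BACKUP=NBU     # commercial integration: Symantec NetBackup
```

With one of the external methods, `rear recover` recreates the disks and then hands the file restore over to the backup product, which is exactly the "best tool for the job" split described above.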
But for commercial backup solutions we work with a sponsoring model. So we don't write those just for fun; we can add one if you are willing to pay for it. And, very important in my opinion, ReaR integrates very well into the network, so you can have a complete network-only solution. You don't have to fiddle with tapes or CDs or other external hardware; you can, but you don't have to. A fundamentally important point: if you want to build a service around it, you need a good network. Not everyone has a good network, but OK, the big companies who can afford it do have one. A good network also implies a good storage solution, so that a NAS or SAN is available and you get very quick storage and retrieval in your environment. It is also important for companies that, since the software we deliver is open source, they can make their own branded RPMs or packages or whatever; they can tweak it a little bit, repackage it, and just distribute it to all their computers. I think that is an advantage. And the scheduling, of course, is done by their scheduling team or whoever.

So the development of ReaR follows an open source model. We use SourceForge to store our data, and you can use Subversion to get the snapshots, but the snapshots are also available on the internet.
You can get a daily update of our builds. Development is done based on sponsoring, or by the community. As I already told you, we have two main developers, but there are quite a few active developers in the community; two guys from the Federal Police, for example, have been very active in recent months, working very closely with us to make new features available, for which I am very grateful. And that is the strong point of open source: you have a community, you get feedback from the community, you get very good ideas, and together you can have a discussion and work towards a good solution. Sometimes we get patches, and we are very grateful for patches, but sometimes they are very weak; then we have the community and we can improve the patches very quickly. That is very nice; I like it very much.

The framework. Why is that framework so important? When we built ReaR, we knew in the backs of our heads about the mistakes we had made with mkCDrec: it was very monolithic. It was a very strong program, and it still is, but it did not let us do really good code design, code development, and code patching; a monolith is not a good module, in fact. What we have now is a framework: you can just plug in modules, and a module is very easy; it is written in bash. We do not have C code; it is just a bash script, very small, sometimes only 10 lines long, to do one job. So it is easy to debug, and you can just plug it in. I will explain that a bit later. Documentation is online, updated for release 1.9 of ReaR, and so are the release notes. This presentation will also be on our website, so you can download it from the presentations section; I will put it there this evening.

So, the architecture of ReaR; I will explain it in a modular way. If you run the command `rear dump` you will get an overview like on the left side of the picture: it tells you what architecture you are running on, who the OS vendor is, which version of the vendor's OS you are using, and you have a
configuration tree. On the left side you see the configuration files that are available. Most of the configs are pre-made by us, following best practice, but there are two configuration files that are very interesting for you: they are called site.conf and local.conf. In site.conf, in fact, you can put variables for your whole site; if everything talks to the same PXE server, you can define it there. For your local computer you just use local.conf. These are stored in the /etc/rear directory.

So how do you use it? All the scripts are stored under the /usr/share/rear subdirectories; there you find the complete directory structure. And we have a list of methods: a method to make a rescue image, a method to restore it. And it is the same data: your ReaR executables, well, scripts, are also on the rescue image, and so are the configuration files. So every configuration file that you modify on the local computer is also on the rescue image; it is copied, in fact, so you have the exact same definitions and the exact same programs available. `mkbackup` and `mkbackuponly` are available, and `recover`, important when you want to restore. And you can also plug in modules, like I said. For example, cfg2html: it is not shipped as a module, but if you install cfg2html you can just enable it in ReaR with a variable, and then it will automatically be detected and run. The output of cfg2html lands in the subdirectories of /var/lib/rear, in fact.

OK, I already explained the site and local configuration files; you can see some examples here. An example of output: OUTPUT could be ISO, could be tape, could be One Button Disaster Recovery, for example. You also have backup definitions; NETFS is, e.g.,
using an NFS server or a CIFS server; USB is also a NETFS solution. We tend to use the NETFS solution a lot. NETFS goes together with BACKUP_URL, and there are some further options; if you need, for example, user and password credentials, there are options for that. These options are not hard; they are all described in the concept guide, which you can find on the website. We also have a new file in the documentation directory that gives you an overview of the possible configurations you can start with, because I know it is a bit confusing in the beginning, but in fact it is very easy; once you understand how it works, it is very simple.

If you run `rear` without any option, it gives you some help and a list of the commands that are available to type. Always handy to start with is the `dump` command, which you saw a few slides ago; that gives you an overview of your current OS. The second thing you can start with is `mkrescue`: that makes a rescue image without a backup. If you just do `rear mkrescue`, you can test it; you need to know whether your image boots on your system, or on another system, before you run `mkbackup` and `recover`. Don't worry; there are a few papers on the internet explaining the commands and how to use them in different scenarios. The `mkbackup` method is almost the same; there is only one difference: it also makes a backup. For the rest it is exactly the same.

It has phases; we already discussed the phases a bit when talking about disaster recovery. You have a preparation phase, you analyze your system, and it is all stored under the /var/lib/rear/recovery subdirectory. There you find the data about your local system. That directory is also copied onto the rescue image, so you can always look on your local system to see the latest status of your configuration. And then there are the different phases; it is not so important that I explain them all here; you can find in the concept guide a more detailed explanation
of it. An example: if you use the flag `-s`, that is a simulation. It is very interesting: it gives you an overview of all the little sub-scripts that are included in a certain method or phase. It is not readable here, but OK; if you install it, just try it once. It doesn't harm anything; it doesn't do anything at all; it just shows you which methods, in fact, and which scripts will be executed, and in which order. The order is also very important: from left to right. And you have a log file, stored under /tmp; it is called rear-hostname.log. Very interesting to look at. Not much output is shown on screen, but if you have errors, you will see them there.

The recovery phase, well, is the opposite: you have to recover your system. These are the steps that are required. First the verification: it will verify whether your rescue image is capable of doing a recovery. If you only did `mkrescue` and don't have a backup, and the method says, for example, Bacula, but you never defined any Bacula environment, then of course nothing can be restored, and it will complain about that. Very important to know: whichever backup and output methods you decided to use, the recovery phase will use exactly the same ones, because, as I explained, `mkrescue`, `mkbackup`, and `recover` use the same set of configuration files. Don't expect it to do something else. That is important to know; you have to foresee it. Nothing is impossible, but that is part of the planning. Here too you can do a `-s`; you see here it is a very small output, because it was a REQUESTRESTORE. That is a very simple method that just does its part of the job and then waits and tells you: I am ready for the restoration. Then you do your own thing to restore your data, for example with rsync or something else; it waits for you, and when you are done, you just hit return and it does the rest of its job. That keeps ReaR itself out of the actual data restore, but it is available; in some cases it can be interesting.
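The ordering that `-s` prints reflects the framework design I described earlier: a phase is just a directory of small numbered bash scripts that a driver sources in lexical order. The toy model below is not ReaR's actual code, only a sketch of that idea, with made-up script names:

```shell
#!/bin/sh
# Toy model of a script-driven phase: numbered snippets in a directory,
# sourced in lexical (i.e. numeric) order by a tiny driver loop.
set -e
WORKDIR=$(mktemp -d)
mkdir -p "$WORKDIR/recover"

# Three mini "scripts", each only a few lines, like the real modules.
echo 'echo "recreate partitions and file systems"' > "$WORKDIR/recover/100_restore_layout.sh"
echo 'echo "restore files from the backup"'        > "$WORKDIR/recover/200_restore_backup.sh"
echo 'echo "reinstall the boot loader"'            > "$WORKDIR/recover/300_bootloader.sh"

# The driver: source every script of the phase, left to right,
# and remember the order for inspection (what a simulation would show).
RUN_ORDER=""
for script in "$WORKDIR"/recover/*.sh; do
    RUN_ORDER="$RUN_ORDER${RUN_ORDER:+ }${script##*/}"
    . "$script"
done
echo "order: $RUN_ORDER"
rm -rf "$WORKDIR"
```

Adding a module is just dropping another numbered file into the directory; that, in essence, is why the framework is so easy to extend and to debug.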
I already explained cfg2html; I cannot stress it enough, it is quite important. If you activate it, you will see a line coming up, the red line over there, and you will have some extra text files available under the subdirectory /var/lib/rear/recovery: in case of disaster, what hardware you had, what operating systems, which configuration files are available, what the settings were, and so on. And again the log file; that is an example of it.

The status. As already explained, it is very mature on Intel x86 and x86-64 based architectures. It also works on Itanium, but that is less tested. It works, but the more users use it, the more feedback we will get, of course. It is released as RPM, tar, and deb packages, and it ships with openSUSE and Fedora. Support is available via SourceForge, and we are open for patch submissions, of course; the more, the better.

The current development: as already mentioned, 1.9 has just been released. It has the basic steps for starting with cloning; that is still beta. It does work, but you cannot rely on it 100%, because we still need to tune things; in the next release it will be much better. But it is already usable for migrations from physical to virtual, or vice versa, whatever, and that works quite well. The One Button Disaster Recovery method is available; I don't think many people are using it, but OK, it is there. An important point: the DHCP client is available, and you need that infrastructure for building a service afterwards; so, the network again. And there is a toolkit available for doing other things than just recovery, such as some inspection of your system.

What is still missing: we have a central point of storage for your data, but what is still missing is a central point for your rescue images, for example, or your configuration files. Because you do it locally: you put it on USB or on tape or CD, or you copy it manually to a central point. These kinds of things could be automated, and that is in fact what we are working on now. The future step is
called disaster recovery as a service: also collect the rescue images and configuration files at a central point, so that we get a service around it. That central-point software we call the ReaR server. In the first phase we are in fact collecting the information and building a web-based interface around it, so that you can just make a listing of servers: which ones did you do disaster recovery for, did it work, what were the failures; you can restart it, for example. These kinds of actions should be possible in the first phase. You can group by department, by host, by OS, stuff like that. That is the planning for the coming months, and the plans are now, after six months, getting very real, so you may expect that within six months the first release of the ReaR server will come out.

What are the requirements? It should go through firewalls and other boundaries, network boundaries. It should be protocol independent: we don't want to use fancy protocols, only the standard protocols available on the systems, so that we don't have to install too much other stuff; except the ReaR server itself, perhaps, but even then: on the client side there shouldn't be any additional software; it should just be a new ReaR version, in fact. The ReaR server is only the central part. Maybe a few changes are required in the ReaR software: new variables; you have to point to a ReaR server; how do you want the results to be sent, via mail or via HTTP, for example; and some other small things; and maybe a bunch of scripts to plug in at the right spot.

The ReaR server architecture could look like this. It is not 100% defined; we are still open for discussion, and discussions are going on on the ReaR developers' mailing list. So if you like it, subscribe to the ReaR development mailing list and you can have a live discussion with us: how do you want to see it, what is the best approach? Because we have our own ideas, but ours are maybe not always the best
ideas; the more interaction with the community, the better, and then we can choose the best way forward. So: the central point could be a NAS server. We could use MySQL or something else; it is not 100% defined, but Apache with PHP is probably very usable and will be used. We could just use Postfix as a delivery point, but there are discussions at the moment about using WebDAV, for example, over HTTP or maybe even over an SSL link for enhanced security. Very plausible discussions, and we are really having good conversations around it. So we are getting started with it; that is very good and very challenging for everybody.

The design considerations; I mention a few of them, but don't take it for granted that these will be the final ones; we are still open for discussion. I won't talk too much about it, but OK: it will use standard components, like for example Postfix, Apache, PHP, MySQL or another database. And we don't want to use a daemon, so it should be something where the client pushes. Of course there has to be another path if you want to do a restore; how do you do that, via secure shell command execution or something else? That is still not 100% defined, but OK. The web GUI: authentication could be done by LDAP, Apache can handle that for us, or Kerberos, stuff like that. This is only PHP scripts behind an Apache server, so that is not so difficult to create, but it is interesting to have a central point where all your systems are available and listed. OK, so the ReaR server: standard software.

The roadmap; and this is almost the last slide. This is the roadmap: the 1.0 will hopefully be released mid this year, so that the ReaR server becomes a complement to ReaR and we can create a service around disaster recovery, which is very good, very nice, to have that extra functionality around disaster recovery. And of course there are next steps, but let's take the first step first. If you want to contact us: this is the website; we are the maintainers. Do not hesitate to subscribe
yourself to the mailing list. And that was it for my talk; I'm open for questions, please shoot.

Is there one directory where all the scripts are in, so all the shell scripts? So the question was: can you show us the directory where all the scripts are stored on this PC. I cannot show it, because this is my disaster recovery PC; my PC where everything was on did not respond. It is all written in shell scripts, pure, simple shell scripts. You can find everything under the /usr/share/rear subdirectory, and then you have a structure depending on the method you choose. It is explained quite well in the concept guide how the structure is laid out and how it works. And of course, use the -s simulation option to show which directories are executed, and which scripts in those directories.

Yes, please? The question is: is this Linux only at the moment? Yes, it is. We are open for others; this morning I took a CD of FreeBSD just to try it out, but OK, for the moment this is Linux only. We have Ubuntu, Red Hat, we have Fedora, we have all the others.

Yes, please? The question was: is this also copied? Yes, of course; that's the basis of disaster recovery: everything that is important for a restoration of your system is copied.

The question is: if you want to do a restoration of your partitioning on other hardware, do you have to do it manually or not? Well, that is the purpose of the new release: with the cloning and P2V support, the flexibility is built into the scripting so that it will recognize other layouts, and if needed it will ask you a question.

Yes, please? The question was: I did not like cloning software, well, I didn't like it for disaster recovery purposes; how do we solve that with ReaR? Well, the thing is, ReaR is made for disaster recovery, so we already had in the back of our minds that not all hardware that we restore is equal. So therefore we are using
for example udev to recognize different hardware systems, other types of hardware, and it will propose you a method to change, for example, a SCSI disk to a SAS disk or another type of disk. That is a major step forward in the 1.9: the cloning functionality is more flexible. In the previous releases it was much more strict, and then you had to do some manual tweaks, but we tried to avoid that, because, like we said, in a disaster environment it has to be as automatic as possible. These kinds of things we can in fact trigger and interact with, and we propose you a decent new layout of the underlying hardware, for the network, for the storage.

Yes? What about virtual machines and virtual servers? So you have the base system, which you have to restore first, and then you restore the virtual disks; is anything integrating that? The question was: what about virtual systems within a base ESX system, for example. For the moment, if you are talking about the main system that carries the virtual systems, it will restore the main system, and the backups if you want it, but it will not restore the virtual systems if they are running, because that's another kind of state. So for virtual systems you also back up, or do a disaster recovery of, the virtual systems themselves. In the companies where I work we do it like that as well. The question was: do we restore the virtual machines with an ISO image? We restore the virtual systems in fact like we restore physical systems; there's no difference in the method of booting or the method of restoring.

Another question? Yes? I believe the question was: what about LVM in Debian? Debian is on our list of supported hardware and software. Normally we have a bunch of test systems, because I know disaster recovery is always a pain to test, so we cannot test everything, but normally, to my knowledge, Debian is working quite well. For the moment we have not tested everything, of course, but as far as we know it is working quite well, with SLES,
Fedora, Red Hat, Ubuntu and Debian being the most tested environments, and there are others on the list.

Other questions? What kind of SAN components do you support? The question was: what kind of SAN components do we support? Any kind of SAN that your Linux is supporting. If the Linux supports the drivers, we use them; they will automatically be copied to the rescue image and activated automatically. We are using it, for example, in some companies with XP storage or EMC storage; that's quite transparent.

Another question? Do you need distribution-specific configuration files? With every new distribution version, do you need to update a set of files, or do you simply have a strategy to copy everything, or do you need to update some configuration file for every release, and how much work is that? The question is: for every new release, do we have to update the local configuration files, your site configuration files? No, not really; it's always backwards compatible. So if you do an upgrade with RPM, the local configuration file is preserved. I mean, does ReaR need to have knowledge about the Fedora operating system? If there is a new version of the operating system, do you need to prepare a special release-specific version of your files? The question was: with a new release of, for example, Fedora, does ReaR also need a new version to understand the new release? Yes, that's true. To give a very simple explanation (this is the last question, by the way): we have upstart to boot your systems, and we have the SysV init environment, but with the new Fedora 15, which is still not available, we will be using systemd. systemd is not yet supported; it's on my list for the coming weeks, but for the 1.9 release I decided not to support it yet, because I did not have the time to roll it out and test everything again. So it was decided to make a newer version within a couple of weeks that supports the new boot
environments. I will in fact do that for every new piece of technology that becomes available; a lot of patches are coming from that. Thank you for your time, thank you for your feedback, and hopefully you will enjoy Relax-and-Recover. Thank you.
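The answer above about where the scripts live can be summarized in a small sketch. The /usr/share/rear path and the -s simulation flag come from the talk itself; the block is guarded so it runs safely on a machine that does not have ReaR installed:

```shell
#!/bin/sh
# Inspect the ReaR script tree mentioned in the Q&A, if it is present.
# Everything under /usr/share/rear is plain shell script, grouped into
# subdirectories per workflow stage and method.
REAR_SHARE=/usr/share/rear
if [ -d "$REAR_SHARE" ]; then
    find "$REAR_SHARE" -maxdepth 1 -type d   # one directory per stage
else
    echo "ReaR is not installed under $REAR_SHARE"
fi
# 'rear -s mkrescue' (simulation mode) would print the scripts that a
# workflow executes, in order, without actually running them.
```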
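The answer about site-local configuration files can be illustrated with a minimal /etc/rear/local.conf sketch, assuming the NFS backup method; the variable names are the ones ReaR documents, while the backup server and export path are invented for the example:

```shell
# /etc/rear/local.conf -- minimal illustrative sketch, not a complete setup.
# Produce a bootable ISO image as the rescue medium.
OUTPUT=ISO
# Use ReaR's built-in network file system backup method...
BACKUP=NETFS
# ...and store the backup archive on an NFS share (hypothetical server).
BACKUP_URL=nfs://backupserver.example.com/export/rear
```

An RPM upgrade of ReaR leaves this file in place, which is the backwards compatibility the answer refers to.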
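The daemon-less, push-based design discussed for the ReaR server could look roughly like this on the client side; the payload fields and the server URL are invented for illustration and are not the project's actual interface:

```shell
#!/bin/sh
# Hypothetical client-side push: collect a few facts about this system
# that a central ReaR server might want to list in its web GUI.
collect_dr_info() {
    printf 'hostname=%s&kernel=%s&taken=%s' \
        "$(uname -n)" "$(uname -r)" "$(date +%Y-%m-%d)"
}

# In a real setup the payload would be pushed over HTTP or SSL to the
# PHP front end, for example:
#   collect_dr_info | curl --data @- https://rear-server.example.com/register.php
collect_dr_info
echo
```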