One problem with our setup is that the backup has to run through a remote server, and the remote server won't have exact copies of the user IDs and group assignments from the client machines. So for a full system backup you would probably need to replicate all the user IDs on the remote server, or get them from a directory service or something like that. Most of the time, though, it gets used for backing up one person's files.

What I think is interesting about Box Backup is that it is built as a client-server system. On the features side, it tries to be secure: once a file is stored on the server it is encrypted, so you can use a machine you don't fully trust as your backup store. You have to sign new clients into your backup store, and you can assign quotas, soft quotas and hard quotas, to limit what each client is able to back up. It also tries to use that quota efficiently: it doesn't delete old copies straight away, it keeps copies that have been deleted from the master list and garbage-collects them when the space is actually needed for new backups. That's Box Backup.

One of the guys next door asked whether any of the backup tools handled ACLs in the end; he was having trouble with that, and I said I'd ask here. I'm not sure whether Box Backup handles ACLs, because I don't use ACLs myself. He was reduced to running scripts that backed up the ACL data out-of-band from whatever archiver he was using.

One way of handling ACLs, whatever your backup tool, is to store them in a plain text file per directory tree: dump all the ACLs you want preserved, and then after restoring the files you can re-apply them in one go by running setfacl on that dump, the counterpart of getfacl. That works with any backup storage.

Thank you, that's something I could probably use.
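The out-of-band ACL trick described above can be sketched roughly like this. It assumes the `acl` package (`getfacl`/`setfacl`) is installed; the demo directory and file names are invented for illustration:

```shell
# Create a small demo tree, dump its ACLs to a plain-text file, and note
# the command that re-applies them after restoring with any backup tool.
set -e
demo=$(mktemp -d)
mkdir -p "$demo/data"
echo hello > "$demo/data/file.txt"

if command -v getfacl >/dev/null 2>&1; then
    # getfacl -R recurses and emits exactly the format setfacl --restore expects
    ( cd "$demo" && getfacl -R data > acls.txt )
    # after restoring file contents, re-apply with:
    #   ( cd "$demo" && setfacl --restore=acls.txt )
    grep -c '^# file:' "$demo/acls.txt"
else
    echo "acl tools not installed; skipping the dump"
fi
```

Because the dump is plain text, it can ride along inside any archive the backup tool produces.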
Has anybody used Amazon S3 with any of the backup tools? I know there's one project which wraps it in an rsync-like interface. I haven't been brave enough to try it yet, because of the cost of the storage. So, is everybody else here using some backup solution? What can you tell us about it? We billed this session as being about one topic, but maybe we can bounce off each other and get a more complete picture.

I use Bacula, just on my personal systems. It's sort of all right. The reason I chose it is that it advertised itself as handling removable media quite well when I started using it. It didn't actually, back then, but it does reasonably now. It's a big pile of scripts, but it seems to work, and it allegedly supports other archive formats too. I've never had any difficulty restoring from the backups themselves.

Otherwise I use rsync. When I want to restore, or I want to change hardware or swap a hard drive, everything is just plain files sitting right there on the disk, so it works really well: you can see exactly what you've got. The big disadvantage, I'd say, is that it isn't networked as such; you have to write a script around it to drive the backups yourself. So I'm not unreservedly a fan, but it has served me well and I have plenty of ideas and plans for it. Otherwise I use my own home-grown scripts, and I keep meaning to write something better.
I don't know why it uses that much CPU. It seems to be taking so much CPU power that it can't keep up with everything else, and with the way Bacula treats the data I'm not sure you can avoid that. Has anyone else seen that?

So nobody here is going to say, I have a great system and it does everything I want? Well, apart from the CPU usage, it really does do what I need. It can handle scheduling, it can drive autoloaders and tape drives, and it's really the system we're using across all our sites right now.

I found the Bacula console a bit weird, though. It feels like a user interface that came about by accident rather than by design. But where it works, it seems to work quite well, and the scheduling is fine; you only really need the console for the operator side of things, restoring a disk or manually running a job, media management and checking on things, and then restoring files. If you use the wxWidgets console it has a graphical interface for restores.

I was at a Dallas Linux user group meeting and they had a guest speaker who'd written about twenty books; he was speaking on Linux administration and had just written another one. He said that there was a real shortage of people who could operate big data centres with hundreds of computers or so. Most people were power Linux users, but didn't know how to run something with that many servers. And he mentioned one package that all of the big data centres use. The name slips my mind right now, but I think it had something to do with Maryland. University of Maryland, I think that was it. Amanda?
Amanda, that was the name I was trying to come up with. I've used that. It seemed a bit stuck in the 80s, in that everything's got to go to tape. You can back up to disk, but it treats a piece of disk as if it were a tape, with tape-sized volumes and so on. It sort of works all right. It was a bit hairy to set up, but it's had a lot of testing and it's pretty stable; I ran it for a few years. It's very complicated, though. I set it up at home, and it's very much a black box to me. It also seems to have weak support for spanning tapes. I've used it with real tapes, and with tape files on a hard disk as well, and it assumes that a volume you back up fits on one tape; if it doesn't, it really doesn't like it. I've got tapes now that I can't restore easily, so it can be a lot of work.

There are commercial tools too; I know some of the bigger hosting providers use them, but I've no experience of those. It does seem that there's a bit of a gap in free enterprise-scale tools. I suppose that's partly because backups are one of those things people don't like working on. People happily back up their own home systems and develop free tools for that, but at enterprise level it's difficult to get a development environment that really stresses the software, one where it doesn't matter if you mangle the backups of all your customers. So I guess that's why there's a gap in free enterprise-grade backup tools: for most of the people running such environments it's too risky to be developing something half-finished, and setting up a realistic test environment would be very expensive. I'm not saying it'll never happen, but I think those circumstances hold it back a bit. That's been my experience.
If anyone here has seen something in that space they'd call great, I'd be interested to hear about it.

What I'd like to move on to, since this session was partly meant to be about BackupPC: have we actually got anybody in the room who uses BackupPC or knows about it?

I installed it on my workstation at home. I don't use it any more for anything serious.

Why is that?

Much the same response as I mentioned earlier, really. It seems quite nice: you get a web interface where you can add hosts and say what to back up, where to put it, and everything.

Have you tried doing a restore with it? I find that with half these products everything is nice and easy until it's crunch time.

Restoring is quite easy because, if I remember correctly, you can do it from the web interface. If clients are allowed to log in to the BackupPC server, they can use the interface themselves and say, please get me that file back out of the backup storage.

I deployed it about four or five years ago in a mixed Linux and Windows office environment, and at that time the web interface was a really cool feature, because it kept the load off the administrator: users could help themselves. Also, if you've got lots of similar systems to back up, it's really good in the way that it pools identical files, so you can back up an office full of PCs and use maybe two PCs' worth of storage to do it. We had about 20 PCs, a mixture of Debian and Windows boxes, and the combination of compression and checksum-based pooling squashed it down to something like 20 or 30 to one. So it's very storage-efficient. I'm talking about a roughly four-year-old version, so I don't know how it's progressed since. It coped quite well with machines which were only on intermittently as well.
You could tell it: I'd prefer you not to back this machine up during the day, but if it's been off for the last week, then go ahead and back it up during the day when it's idle. That was quite a nice feature for workstations, because practically all the other backup software I've used is aimed squarely at servers: it assumes machines are on 24/7, and for workstations that assumption just isn't true.

Any other BackupPC users on this side? One of my office mates was pushing for it, and it was on my list of things to play with. One of the things I'd heard about it that made it sound attractive, apart from not needing crazy hacks just to back up to a hard drive, was that it has single-instance storage: rather than storing a file again on every full backup when it hasn't changed, the file is stored once in a pool and referenced with hard links, or something like that, apparently. That sounded like a much more efficient way of using a hard drive as a backup store.

I have a question: is anybody aware of a product which can verify that the files on disk still match an MD5 manifest, so that it doesn't need to back them up again? And a second question: does it have some feature to upload the new files to an FTP server?

No, it doesn't do that.

Actually, about three years ago I wrote a fairly large script which takes all those MD5 manifest files, makes a diff against the previous backup, bundles up everything that changed, encrypts it with GPG and uploads it to the FTP space my provider gives me, so I can run it every day. I don't know of a backup tool that does that. Is there any software clever enough to do backups to an FTP server like that?
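The manifest-diff approach just described can be sketched like this. It is a minimal illustration, not the speaker's actual script: the file names are invented, and the GPG and FTP steps are shown only as comments since they need credentials:

```shell
# Checksum every file, diff against the previous run's manifest, and archive
# only the files whose checksum changed (or that are new).
set -e
work=$(mktemp -d)
mkdir -p "$work/data"
echo v1 > "$work/data/a.txt"
echo v1 > "$work/data/b.txt"

cd "$work"
find data -type f -exec md5sum {} + | sort > manifest.old   # "yesterday's" run

echo v2 > data/a.txt                                        # a.txt changes
find data -type f -exec md5sum {} + | sort > manifest.new   # today's run

# Lines only in the new manifest are changed or new files; md5sum output is
# "<hash>  <path>", so the path is the third field of each "> ..." diff line.
diff manifest.old manifest.new | awk '/^>/ {print $3}' > changed.list

tar -czf incr.tar.gz -T changed.list
# gpg --symmetric incr.tar.gz
# curl -T incr.tar.gz.gpg ftp://backup.example.net/   # hypothetical server
cat changed.list
```

Run daily, each `incr.tar.gz` holds only that day's changes, which keeps uploads small on a slow link.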
I would imagine that's something you could extend BackupPC to do, if it doesn't do it already, because it does have this concept of storing files only once and noticing what the differences are. So if it really doesn't already do it, I don't think it would be a huge amount of work to add. But that's just my guess, a first guess.

What's it written in? From what I've seen of BackupPC it looks like it's either written in Perl or extensible in Perl. I think it is Perl, yeah. And it looks like it uses SMB to connect to and talk to the Windows machines, and rsync to talk to the Unix machines, so it generally connects to everything.

Questions for each other? I have a question, not about software but about configuration: how often should we run backups? The machines are most heavily used at exactly the times we'd want to back them up, and we also have problems checking that the backups we take would actually let us restore.

I always try to stick to a schedule, but it doesn't always quite work out, and I'm never quite organised enough for that. Do other people actually manage to stick to a fixed schedule on their machines?
You can set that up with Amanda: you can say, I want a full backup every so many days and incrementals in between, and I think it can do it on multiple levels as well. Obviously, the more incrementals you take, the more single points of failure you've got in the chain: if you lose one piece of media, all your later incrementals are no good and you're missing data. So it can take a full backup and then incrementals, and then you can either carry on doing incrementals, or do another bigger incremental based on the last full backup and start doing smaller incrementals on top of that. That seems quite a nice way to reduce the dependency on the incremental chain. I think Bacula does that, and I think Amanda does as well.

I use Bacula, although I did find it a bit of a pain to set up and to get the DVD writing working. The way it works is basically that when the DVD fills up it sends me an email saying, please put in a new DVD, write it and label it, and after two or three days I generally get around to doing that. Most of the time, though, it's a good babysitter: it takes care of all the scheduling, it manages all the labels, and when it decides it needs to reuse media it will say, put this old DVD back in, because according to the retention rules that data is now old enough that you don't care about it any more. It takes care of all the bookkeeping to do with backups in quite a nice way. But I've only done a handful of restores, and it has yet to prove itself in a total disaster, so that's the caveat I'll put around it.

There's something to be said for Amanda, for what it is: it's the most proven solution just in terms of sheer years of testing. A lot of these newer things like BackupPC and Box Backup aren't as stable as that. That's just
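The full-plus-incrementals scheme discussed above can be tried out at the command line with GNU tar's snapshot files (this assumes GNU tar; the paths are a throwaway demo, not any particular tool's layout):

```shell
# A full (level 0) backup, then an incremental containing only what changed
# since the full -- the same dump-level idea Amanda and friends automate.
set -e
top=$(mktemp -d); cd "$top"
mkdir data
echo one > data/a

# snap.level0 does not exist yet, so this run is a level-0 (full) backup
tar -cf full.tar --listed-incremental=snap.level0 data

echo two > data/b                 # only this file is new afterwards
cp snap.level0 snap.level1        # branch the snapshot for the next level
tar -cf incr1.tar --listed-incremental=snap.level1 data

# incr1.tar records data/b but not data/a; to restore, extract full.tar
# first and then incr1.tar on top of it
tar -tf incr1.tar
```

Re-copying `snap.level0` before each incremental is what lets you base several small incrementals on the same full, shortening the dependency chain the speaker describes.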
the feel I get, not any sort of empirical measurement, just the feel of the code and the documentation.

How do you feel about something that uses cpio as its method of putting stuff into an archive? It doesn't really matter to me; cpio is well tested, and as long as I don't have to type cpio commands by hand all the time, I don't think it matters much what a tool uses at the low level. That layer is basically cpio or tar everywhere, so it's certainly well tested, and it's been around for years and years.

Well, some of the tools use tar, some use rsync, some use cpio. What would be the advantages, the use cases, for each of them?

With tar or cpio you get an archive: a single file containing a bunch of files plus the metadata of those files. If you're taking a backup of a remote system, tar is not a terribly good option because you need to copy all the files every time, whereas with rsync you only copy the changes. So it really depends on your situation. If you say, I need multiple versions, I need to be able to go back to the snapshot from three months ago, then an archive is a good choice, because a plain rsync mirror of the remote system also propagates every change and deletion: what you have is just what the system looked like yesterday. It depends a lot on how much your system changes and, if you're taking remote backups, on how slow and expensive your network bandwidth is. You've really got to decide on a case-by-case basis what the most appropriate tool is. I use Bacula for myself, and I use rsync to take remote backups of my clusters. For one customer I actually need a combination of
rsync from all the systems to a central backup system, which then gets written out to tape. Where I'm working we're using several types of backup solution: tar, rsync, Amanda, MySQL database snapshots, really a bunch of different approaches for different data types, maybe a bit too many, but it works really well, because we're using both local backups and remote backups for different things.

How do you handle a large mail store, where there are a huge number of files? Even if we take an LVM snapshot, is there a tool that can really cope with it? With that many files, rsync for example just eats a lot of memory and takes a very long time.

Yeah, that is a problem with rsync: on very big file sets it builds a list of all the files up front, which takes forever and a lot of memory when you have that many files. It's not convenient. Something like tar, on the other hand, just walks the file system and builds the archive as it goes, so you don't get that blow-up, but then you don't get the nice incremental behaviour you get with rsync. That has been a flaw of rsync. It's technically possible to walk the file system while you're doing an rsync; it's just not how the protocol currently works, as far as I know. You could certainly build an rsync-type tool that walks the file system as it goes; I just don't think one exists. The only thing I've been able to do with that kind of rsync job is split the backup into parts, so that each rsync process doesn't run out of memory.

We just make a full copy once a month, and configure the mail system to push new mail to a separate store as well.

Has anybody tried dar, which writes incrementals in an archive format? Has anybody got much experience using that?
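The single-instance storage mentioned earlier and the rsync snapshot style touched on here meet in `rsync --link-dest`: unchanged files in a new snapshot become hard links to the previous one, so each file is stored once however many snapshots reference it. A minimal sketch, assuming rsync is installed and using throwaway directory names:

```shell
# Two snapshots of the same source; the unchanged file is hard-linked into
# the second snapshot instead of being copied again.
set -e
d=$(mktemp -d); cd "$d"
mkdir -p src
echo unchanged > src/keep.txt

if command -v rsync >/dev/null 2>&1; then
    rsync -a src/ snap.0/                         # first snapshot: full copy
    # --link-dest is resolved relative to the destination directory
    rsync -a --link-dest=../snap.0 src/ snap.1/   # second: links unchanged files
    # test -ef checks the two paths share one inode, i.e. one stored copy
    test snap.0/keep.txt -ef snap.1/keep.txt && echo "stored once, linked twice"
else
    echo "rsync not installed; skipping"
fi
```

This keeps many restorable snapshots for little extra disk, though it doesn't fix rsync's up-front file-list memory cost on huge trees.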
I've used dar in the system I described. What it does is write out a catalogue file, and you feed that file back in at a later date to make an incremental run; if you want to do a full run again, you just delete that file. It writes the catalogue at the beginning of the run, so if a run is very long and files are changing while it's going, two incremental runs can actually both end up containing the same file, but that's only ever a small number of files. The main problem I've had with dar is restores: because of the way the catalogue works, if you really need to search the entire archive for a file it can take a while. It might have some other issues, but that's the main one I've hit.

What about snapshotting systems that are online? For example, with Xen I can make a snapshot of the block device, but the problem is the memory. Xen can freeze a domain, so you can pause the instance briefly and then save it, but if you use the save function it shuts the instance down and you need to restore it afterwards; it's pause, save, then restore as a full procedure. I haven't found something that can snapshot without taking the instance down at all.

And if you just snapshot the disk, there may be transactions in flight. The problem is that some of our customers are using MyISAM storage, and that can end up broken when you take a snapshot, because MyISAM may not have finished writing to its files. InnoDB works in a different way: it can recover once the current transaction is finished, but with MyISAM you can just end up with broken tables.
I assume that's not the case with InnoDB. I've heard of some way of synchronising MySQL with an LVM snapshot; whether that was a commercial extension or not I'm not sure, but the answer is to synchronise the snapshot with your database: the database has to assist in getting its files into a consistent state at the instant you take the snapshot. You could do the same thing with Xen and all these virtualisation setups; I don't know whether there are any off-the-shelf tools for it, you'd have to ask the Xen people.

There is some way to dump the memory, I remember, because it just freezes the instance, so it isn't accumulating changes while you take the memory dump; it only works on frozen instances, I guess, and you may lose TCP connections depending on how long it takes.

Well, you can even do Xen live migration from machine to machine, so if you can do that, I'm sure you can snapshot as well.

Anyway, we run with replication: we have machines replicating the database, with both InnoDB and MyISAM tables, and we've checked that the data in the replica database is all there, so you're not missing anything.

The Xen live migration does build up the copy incrementally. It uses the page tables to keep track of which pages are dirty since the last pass, so it copies more and more across, and when it decides it's got about 99 per cent, it freezes the running instance, takes another look at the page tables to see what's now dirty, syncs that to the new machine, and then kicks the instance off on the new machine. In a demo they gave, the actual switch-over took something like a third of a second, which was very impressive.

So could you use that mechanism, machine to machine, to effectively take a snapshot?
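The "make the database assist the snapshot" idea can be written down as a command sequence. This is a pseudocode sketch, not a tested recipe: the volume group, logical volume and size are invented, while `FLUSH TABLES WITH READ LOCK` and `UNLOCK TABLES` are the standard MySQL statements for quiescing MyISAM tables:

```
-- 1. quiesce: flush MyISAM table data to disk and block further writes
mysql> FLUSH TABLES WITH READ LOCK;

# 2. snapshot the volume holding the data directory (hypothetical names)
#    lvcreate --snapshot --size 1G --name mysql-snap /dev/vg0/mysql

-- 3. release the lock; writes resume, the snapshot stays consistent
mysql> UNLOCK TABLES;

# 4. mount the snapshot read-only, back it up with any tool, then
#    lvremove the snapshot when finished
```

The lock only needs to be held for the instant the snapshot is created, so the write outage is brief even on a busy server.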
It would be nice to be able to back up a running machine, or replicate it onto decent hardware, without much downtime; pausing it briefly for the last part of the copy probably wouldn't matter. And if you can't do that with existing tools, I'd have thought it wouldn't be a big job to add, because the code to do live migration already exists.

Let me throw another topic into the mix. We've had quite a bit of discussion, very interesting discussion, about the backup side of things, but another part of this session was supposed to be about installation. It's not something I've really gone into recently, especially not since the Debian installer changed. Is anybody actually using the Debian installer to do automated installations yet? What's working for you and what isn't? Who's using FAI? Is anybody doing automated installation at all?

Yes, we're using preseeded d-i extensively at Google, internally, for the corporate infrastructure. We don't use FAI; we use something called Slack, which we'll upload to the archive, probably this week. It's similar to FAI in that you can have a number of Slack roles, which are a bit like FAI classes, but it's very simple: it just drops files onto a machine. So we'll have a Slack role for, say, you are an SMTP server, which drops the relevant configuration files onto that machine, and it has a pre-install script and a post-install script which do any extra package installation and that sort of thing. So we have a base system, which is what the preseeded d-i spits out, and then we apply whatever Slack roles we need to bring the machine up to whatever it's for. If a machine dies and needs to be reinstalled, we can just do the automated installation on site again.

And is that the sort of thing that the user who's going to have the machine on their desk would kick off for themselves, or is it a bit more centralised? We don't use it
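For anyone who hasn't seen preseeding: a preseed file is just a list of debconf answers fed to d-i so it never has to ask. A minimal illustrative fragment might look like this; the values are examples, not a complete working configuration:

```
# preseed.cfg -- answers the installer would otherwise ask interactively
d-i debian-installer/locale  string en_GB.UTF-8
d-i netcfg/get_hostname      string build-01
d-i partman-auto/method      string lvm
d-i pkgsel/include           string openssh-server
# run a command in the freshly installed system at the end
d-i preseed/late_command     string in-target apt-get -y install rsync
```

A `late_command` like the last line is a natural hook for pulling in whatever applies the machine's roles after the base install.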
on the desktop; we use it more on the servers. But, you know, someone who needs a DNS server, for example, could just follow the instructions and install it, and Slack would apply the particular Slack role. We also do automated installs on workstations, and we use cfengine for those as well: the base install comes out of d-i, and then there's a bunch of cfengine policy to finish the machine off.

Anyone else on this side using the installer's preseeding or something similar? I've been playing with it. I've had some problems getting the partitioning scheme I want, because I want to make a lot of LVM partitions with different types of file system on them, and that's been tricky.