Hi everyone, I'm known as Ram and I work with ThoughtWorks. I have done a variety of things in my professional life, with a big variety of hardware, software and network exposure. This talk is based on my experiences at ThoughtWorks: I was a sysadmin, and now I'm becoming a build engineer for a very interesting thing that we are doing. This is something that we have found very useful and important to us, and we are actually spreading this experiment throughout the organization. My talk is aimed at all of you who may or may not have played with hardware, and may or may not have even thought about this before. It's very rewarding, it teaches you a lot, and you just might save some money for certain use cases. So that we are all on the same page, some terminology. Here is what the term SAN usually means to a lot of people, especially to those who sign the cheques. This is how SAN devices are pitched to people who want high-fidelity storage. We are all used to what we do on the desktop: you have a laptop, there is a hard disk drive, and yes, we all know you should make a backup. If you lost your disk one fine day, because it got dropped or stolen or whatever, you will just cry a bit, say "oh, I lost my data", and life moves on. But there are enough use cases where life will not just move on. You have to have very good quality storage, and it has to be really high fidelity. That means if you ask it to please write this block of data, it should actually have written that same block of data. So high-fidelity storage is something very important. There is also a related term called NAS. You have SAN, which is storage in a network, and you have NAS, which is network-attached storage. As of today the lines are very blurred, and once people start using these things, these small demarcations don't really make a big difference.
A NAS is very much like a network file share: you have a share, and people on Unix or Windows boxes just connect to it over the network to store files and fetch files. We'll set aside these differences and focus on how you can build your own storage and understand how storage works. So this is what a SAN looks like. We purchased this device at my company; it cost us $68,000, and we purchased it for a certain set of reasons. There are other SAN vendors too, and they all have their merits and demerits. In this case I specifically purchased this one because I needed certain analytics which were very important to us, so we are having a good time with it. One thing to remember as we go along this discussion is the difference between a SAN or NAS device and the usual desktop sharing that we do. It's very common, let's say you're a small setup or just some random small team, to say: look, I don't want to talk to the IT department, let me just get a box, put a few disks in it and dump a whole lot of files onto it. This happened on my own project, for example, and I went and told the team what I could do for them instead. So I'll give you an example. I'm part of a very big team, and one section of them has a lot of data: files which are overall 800 gigs in size. It's important to them because they analyse it and do a whole bunch of things with it. What they said was: look, if we go to the IT team, they're going to take their own time, they will say we need a budget, and random things that way. So they said, let's just take a workstation, ask for one or two disk drives, and we will manage it ourselves; we are responsible for the lifetime of the data.
And that's when I went and made them understand that you could do that, but just setting up an average Linux or Windows box is not the same as having a good storage system. You have performance to take care of, and there are caching benefits that you would lose out on if you do not engineer a storage box. So what is this engineering I just spoke about? Look: when you pay a lot of money for an enterprise SAN, the one over here, you get a whole lot of things. You get stuff like really, really redundant power supplies; for example, that device I showed you has four power supplies, while the average workstation has just one or two. There is a lot of cache engineering that happens in these devices, at read time, at write time, and for frequently accessed blocks. The average Linux or Windows box, you ask it for a block of data, it will faithfully give you a block of data; you tell it you want so-and-so file, it will hand you the file too. But these kinds of devices do a whole lot of engineering on top of that. For example, these high-end SANs have SSD drives fitted in as write caches. That means when your application, when a server on the network says, hey, please write this one-gig file for me, the SAN acknowledges as soon as the data hits the SSD cache. In fact, in experiments at work we have seen that it is sometimes faster for us to write to a SAN over the network than to write to local disk. The reason is that the SSD drives receive the write data and then flush it to disk offline; reads are similar, data is cached in RAM, a lot of things happen that way. Right, and of course, battery backups. Now let's take a situation.
Let's say there are writes happening to a disk and suddenly you lose power, for whatever reason; imagine even the power supplies failed. A lot of these high-end SAN devices have disk controllers that your hard disk drives plug into, and some of those controllers have a battery right on them; some have separate high-end capacitors holding a lot of electrical charge, and so on. So even if the whole device loses power, whatever is in the RAM cache or elsewhere, there is enough electrical charge within the SAN device that even that will finally get written to disk. It is engineered that way. That is what you get with a high-end SAN; that's why people put in that kind of money. Now, if you build your own SAN, you may or may not be able to do things like that, but there are things you can get away with, which is what we discuss here. Okay, so why would you build your own SAN? I'll talk about a very real-world case just after these initial slides. The top reason is that you have full control. With a commercial SAN you are usually restricted to whatever the GUI gives you, or whatever the storage admin in your office chooses to give you. And see, SANs are expensive. That SAN I showed you, for which we paid $68,000, I have configured such that it gives me just 10 terabytes of storage. That's like around $6,800 per terabyte. The average 1-terabyte hard disk drive in the market today comes at just 5 or 6 thousand rupees, but this is thousands of dollars per terabyte. So storage is very expensive, no matter that there is a lot of misconception out there that storage is cheap.
Yes, storage is cheap when it's those tiny inexpensive disks you can just put in your laptop or your desktop computer at home. But when it has to be reliable storage, storage is not cheap: you throw in a lot of disks, and at the same time you have to put in a lot of other mechanisms around them. So one thing is, you end up with a lot of control if you build your own SAN, and you're not tied to that high-end pricing; the price is very low. And if nothing else, you can at least understand how storage works. Because there are all these Amazon APIs and S3 storage, and people are busy using snapshots and this and that and everything; if you don't want to be in a position where you need a lot of bandwidth to push data before you can play with storage, you can try it out very cheap. You can actually try it out in a virtual machine and then set up a box; we'll look at exactly that. Here's what we did. I'll tell you a use case first. The situation was this: there was a development team and they said, hey, we would like a few environments in which to test. In our place we use VMware, we use it a lot and we find that method fine, though we are trying alternate things too. One environment for this team comprised nine virtual machines, and that whole set took around 600 gigs of disk space. That worked out fine; I worked very closely with the team. I've been a developer, so I know how to think in developer terms, and we worked very closely together. One week of setup time and the whole thing worked out very well. And then they said, hey, since this worked out so well, give us another four. At that time I said, okay, fine, I'll give you another three and let me figure out the fourth, because my VMware server doesn't have that much disk.
So I went and said we need budget, and then we decided: look, a regular SAN, even a cheap one from Dell, is going to cost us $10,000. So is that justified? Do you think we'll actually continue with more environments like this for a longer period of time? That was the question asked of me. I said, okay, I don't have an answer; I don't know at all. So I went back to the team and said: give me two days, I'll get back to you. The thing is, I'm part of an open source project which started off in Bangalore, a distro called Belenix. It was a foundation of the OpenSolaris distribution, and I'm very familiar with a lot of those technologies. So I said, why not just try out a box of my own? It comes with a file system called ZFS, and you'll see a very quick, interesting demo of it. Here is what I did. I said, let's get some random workstation. I asked my asset team for three hard disk drives. At that time, a few years ago, a one-terabyte disk drive cost us 6,000 rupees; I don't know what they cost right now. So that's it: take a box, stick in three disks. At that time I used Solaris 10, which was free then; now I use something else called OpenIndiana, which is free now. Just put it on there, run a few basic commands, and make sure the sysadmins understand it; in fact, some of them are here today. And we went ahead with it. And here's what we did. This is a little bit important: avoid hardware RAID. We'll come back to this in a short while; that slide was in the wrong position. So here's what we achieved. One box with three disks gave us around 2.72 terabytes of usable space. One environment was 600 gigs. I ended up creating up to 20 environments, and we ended up using just 750 gigs of disk, instead of what would otherwise have been 12 terabytes.
If I were to take a Dell SAN, one of the MD-series devices that Dell sells, I would have had to provision 12 terabytes of usable space. Here I provisioned 2 terabytes of usable space, but thanks to a technology called snapshots and clones, I ended up using just 750 gigs. We'll see in a small demo how this is possible. People were blown away when we showcased things like this in the team, and that's because of technologies like snapshots and clones. Snapshots and clones are very much like, I'm assuming some of you have used version control systems like Subversion and Git, so what happens there is that when you branch, branches are just references to each other. It's somewhat like shortcuts on your computer: you snapshot, and then you say, based on this snapshot I want a clone, and your clone just needs the delta space. What we found was that all our environments had this 600 gigs in common, and the only additions we made to them were our own application code and any data that the application generates. So all 20 environments together generated just 150 gigs of delta data in this use case, and it worked out to my advantage. I also forgot to say we added 8 gigs of RAM to this computer. So this is what we achieved, thanks to snapshots and clones. Let's have a quick demo. I'll tell you what to expect, and please bear in mind this is a very hands-on sort of thing. There are ready-made open source storage distros out there which you can set up and just get started with; you don't always have to type these commands, but you'll see it's interesting enough and simple enough. What I have done is, I actually run my own distro, but I don't have the right video drivers to project, so I'm doing this on Windows, and my demo itself will be in a VirtualBox VM here.
Hey, any questions so far from anybody? All right, I'll go ahead with the demo. So we go to VirtualBox, here it is. In this VirtualBox VM I've installed a distro called OpenIndiana, which uses this file system called ZFS. We'll see this and then discuss all the theory and the advantages; we'll see stuff hands-on. I've deliberately kept all the names simple. I've configured this with two demo disks, two disk drives; there is a third disk drive which has the operating system, and there are two raw disks with which we're going to create storage right now. The speed you see here is slower than what would happen on even physical devices. So here's the thing: we have a VM, we have OpenIndiana installed, and there is one disk drive which runs the operating system. This is how the typical real-world case is. Now, to make things faster, I've already gone and added two more disk drives here; these are the disks we're going to use for our storage. So here is the demo. The first thing we do, and remember, on the ready-made open source storage distros all this happens via a GUI; here we do it via the command line. The first thing we do is discover the two disks. In the Linux world you have device names like sda, sdb, sdc; this is a BSD kind of thing, so the naming is a little different. We'll just use it. Let's discover what we are using right now. I'll say zpool status; we'll look at the pool concept a little later. So my drive is called c1d0, that's the BSD kind of device naming. Is this font large enough? Can you see it? So I clear the screen. Let's discover our disks. We've already seen that our current disk is c1d0; let's discover the other two. The command here is, for whatever legacy reasons, format. So c1d0 is what our OS is installed on; the new ones are c1d1 and c2d1.
Let's keep those names in mind and actually use them: c1d1 and c2d1. I'll clear the screen again. Okay, so we say zpool create, and let's come up with a name; we'll call it demo. We'll take these two disks and just add them as a pool: zpool create demo c1d1 c2d1. I just gave this command, saying: let us create a pool called demo out of these disks. This pool business is just that, a pool of disks, treated as one storage volume. Let's take a Windows concept: in Windows you have a C drive and a D drive. In the Linux world, for example, there is something called LVM, which lets you create a volume; this pool is the corresponding equivalent of a volume. So what does this give us? We've taken two disks. Sorry guys, I made a small mistake there; bear with me a little bit. This distro doesn't take the keyboard shortcuts because I haven't installed the VirtualBox guest additions, and this is not my computer. Is this font fine? Shall I make it smaller? Okay, let's go smaller. So look, here is our pool. There's a pool called demo, made up of two disks. Let's see what storage that gives us. There is the OS pool, the root pool, where the OS is installed, and here is our demo pool: two drives of 16 gigs each has given us 32 gigs.
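To summarize the commands so far, here is a minimal sketch of the pool-creation steps from this demo. The device names c1d1 and c2d1 are specific to this VirtualBox VM and will differ on your machine:

```shell
# Show existing pools and their health (you'll see the root pool, rpool)
zpool status

# Discover attached disks (the Solaris/OpenIndiana legacy tool)
format

# Create a pool named "demo" striped across the two new disks.
# Note: a plain stripe has no redundancy; see the RAID-Z discussion later.
zpool create demo c1d1 c2d1

# Verify the new pool and its capacity
zpool status demo
zpool list demo
```

The pool is usable the moment the create command returns; there is no separate long-running initialization pass.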
Okay, these are sparsely allocated because I'm using VirtualBox, but with physical disks this is what you would get. Now, here's an interesting difference between typical SANs and a file system like this. What happens here is a kind of format-as-you-go mechanism, so this storage is immediately available. If you have a number of disks of a terabyte each, usually there is an exercise called initializing the disk; here it is initialized on demand, so this pool is actually ready to use. Let me try it. I just tried Ctrl-Shift-minus to resize; not working, the keyboard shortcuts don't go through at all. All right. So what we have done here is taken two disks and created a pool. Let's put this file system to use right away. The pool is called demo, and we create something on it called data; I'm not very imaginative with names. So we say: zfs create demo/data. That's created; it's carved out some space, and we can start using that space. Where is it mounted? One minute, zfs list... my mistake, there, it's mounted under the demo pool. We just copy a few files over, and fine. So what does this give us? You created a file system, you dumped a few files in there, and now let's look at what you get. The command is zfs list. Here's what you have: the rpool, which has a number of file systems, even nested file systems, and here is the pool called demo with a nested file system called data, into which we've dumped a few files, 83 megs; I just copied them from /usr. Now let's say you want to share this over NFS. In the ZFS world, and this is by command, with a GUI it's even simpler, what you do is: zfs set sharenfs=on demo/data. We just set that attribute, and now this thing can be accessed via NFS.
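The dataset and sharing steps just shown can be sketched like this. The dataset name and the source path are from the demo; the default mountpoint follows the pool/dataset name:

```shell
# Create a ZFS dataset inside the pool; it is mounted immediately
# (by default at /demo/data -- no mkfs, no fstab entry needed)
zfs create demo/data

# Put some files on it
cp -r /usr/share/doc /demo/data/

# Show all datasets in the pool with the space they use
zfs list -r demo

# Share it over NFS; the setting is stored in the file system itself
zfs set sharenfs=on demo/data

# Or share it to Windows machines over SMB
zfs set sharesmb=on demo/data
```

Because sharing is a property of the dataset rather than of a separate server configuration, it travels with the disks when you move them to another box.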
Okay, so what happens here is that in these kinds of setups you don't have a separate NFS server or separate permissions files or whatever; that configuration is associated with this particular file system. Why is this cool? This is interesting because, remember I said we were building a cheap box. Let's say the power supply died, or the motherboard failed, or whatever. The thing about building your own SAN is that you are your own enterprise support: you have no one to call, you're dependent on yourself. In a typical SAN, your hard disks have to be plugged in in a particular sequence: if on the original box it was disk 0, 1, 2, 3, then when you go to a new box it had better be the same sequence. That's one of the things hardware RAID imposes. Here it doesn't matter: the disks all carry the right labels and everything, so you just remove the disks, attach them in whatever random sequence, and you're just fine. And we set NFS sharing with sharenfs; if you had to share with Windows computers, you'd just say zfs set sharesmb=on and you're done; it's ready to be shared in a workgroup sort of setup. You can also integrate with your directory, so you have fine-grained controls and all those things. Now, the fun bit is that because this was set on the file system, you unplug these disks, plug them in somewhere else, and all that configuration goes along with your disks; all the configuration goes along with the file system. You have nothing special to set up. Because of this one feature, what we have now done at work is do away with the operating system disk also. We just use USB drives: we put together such boxes, install the operating system on the USB drive, plug it in, configure our file system, and it's running. Tomorrow if the USB drive fails or whatever, we don't care.
We get a fresh USB drive, plug it in, mount all of this, and all the configuration is ready to go; it's all attached right there. This is great for me because, see, when you propose things like this, if you do it for fun for your own group, that's great, but in a commercial setup there are people who will have some concern about how reliable this is: if you're gone tomorrow, where do we go? There are questions like the truck factor and whatnot. So when you give demos this way, you show someone how you set it up, mount it on VMware, share it to Windows, and these are the commands you use; you create pen drives, you unplug disks and plug them in elsewhere, do it all. A half-hour session and people are up and running, and that's one of the good bits about such home-grown things, at least with this technology: it's very easy to sell and get some traction. So we did this, and now we come to how I achieved that other bit, 20 environments sharing a 600-gig base while using just 750 gigs of total storage. That's the concept of snapshots. A snapshot is a very simple thing. Clear; our file system was demo/data, so I'm going to say: zfs snapshot demo/data@basic. This is a marker snapshot called basic, and I'm going to share it amongst people. Now we have some stuff at demo/data, 83 megs of it. Let's assume this was one environment, and now we are going to make a clone. The clone is another simple command: you say zfs clone, name the snapshot, and say you want another copy based on it, which we'll call first: zfs clone demo/data@basic demo/first. Now when we say zfs list -r demo, look, here's the interesting bit: we have data, and we have this thing called first.
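The snapshot-and-clone pair of commands from the demo, as a sketch:

```shell
# Mark a read-only, point-in-time snapshot of the dataset
zfs snapshot demo/data@basic

# Create a writable clone based on that snapshot.
# No data is copied: the clone just references the same blocks.
zfs clone demo/data@basic demo/first

# The clone's USED is near zero; its REFER matches the snapshot
zfs list -r demo
```

This is the whole trick behind handing out many 600-gig environments while consuming only delta space.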
Data refers to 83 megs worth of blocks, and first also refers to those same 83 megs worth of blocks on disk, but it's actually using just a kilobyte or so. Look at this fun bit. I'll say ls /demo/data: there's a bunch of files. Now I clear this and go to the other thing we just created, which we called first. The same files are here. You get this, right? Even though first uses almost no actual disk space, it refers to the full 83 megs, and these are not soft links. Let me do an impromptu illustration. Say this is the actual space used on disk, and here is my file system entry table where I say: all these files together use this much disk. When we want another clone, we mark a snapshot, which is an entry saying that these blocks together comprise this snapshot. Then we say: I would like a clone based on that snapshot. So you're not copying anything on disk; you're just creating another set of reference entries which point at the same blocks. It's almost like a hard link, except there are no inodes here, no inode concept at all, but you could loosely call it a hard link. The thing is, when you now make changes to this cloned file system, only the changes take more space. So look, we are in first; let me just delete all the files here. Now there are no files here, but when I go to data, I still have my files. That's because the clone has a second set of references, and deleting there just says: I'm not going to refer to those blocks anymore, while the original set of entries is still referring to those blocks. So that's still in place.
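The delete-in-a-clone behaviour just described, sketched with the demo's dataset names:

```shell
# Deleting files in the clone drops only the clone's references;
# the original dataset's entries still point at the blocks
rm -rf /demo/first/*

ls /demo/first   # empty
ls /demo/data    # original files still present
```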
So now you could start making a whole lot of forks this way. We made first; let's simply create a second clone from the same snapshot, called second. Sorry about this whole screen-resolution business. We created first, we now create second, and then we say zfs list -r demo. We have a first and a second. Look at first: we had deleted a whole lot from it, and you'll see some funny numbers here; the space accounting takes some time to catch up, something about the nature of ZFS. But look, your second is again using just a kilobyte or so, and if you were to add files to it, it would take exactly that much extra space. So let's copy something in. I copied it into data, so look now: data itself is 85 megs, while second, which came from that pristine snapshot, still refers to the original data. Each clone refers to the common blocks plus its own additions. With facilities like this, you can start creating cheap copies for people. I'll give you one example of something you could do; in the distro I told you about, we are doing this. We have these build zones; they are like OpenVZ or Linux containers, and the equivalent in the Solaris world is called a zone. What we have done is, we want to provide a pristine build environment per zone, and we want our developers and our tools to be able to add and remove things as they want. So we create a file system, install all the right things on it, and mount it into zones; that's it, it's suddenly a very cheap copy. If anyone wants, they can go and add some things.
For example, someone says: hey, I don't want GCC 4.2, I would like 4.7, or let's say 4.6.7. They can do that, and all it will take is just that delta data. In the VMware situation I spoke about, you give someone a fresh environment and they start off thinking: oh, I got 600 gigs all to myself. In that project we were writing plugins for Microsoft Outlook, talking to Exchange, doing a whole bunch of things. So they would say: okay, we are using Outlook version so-and-so, let's upgrade it. The QA person is free to upgrade; they are free to do whatever they want, because of a very interesting thing. See, we had a snapshot of data, right? Now say someone comes and says: hey, you gave me an environment, and we would like you to preserve the changes we made in it. What we do is, this second file system that came up, we just snapshot it; it's a file system, so you can snapshot it. Okay, I've got just 5 minutes left. So you have snapshots, and you have rollback, and you can roll back just like that. For example, let's snapshot the file system: zfs snapshot demo/second@abc. Yes, that is the right syntax; the snapshot is marked. We go to /demo/second and create a file here, so second has also moved on from that state, and this file system was snapshotted as abc. Suppose someone makes changes and then says: I just want to roll back to the pristine copy. You mark a snapshot for them when they say things are good, and then you simply roll back to that snapshot. So look at this: zfs rollback, and the change is gone.
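The snapshot-then-rollback workflow for QA, as a sketch (the dataset and snapshot names are from this demo; the junkfile stands in for any destructive change):

```shell
# Snapshot a clone once QA says the state is good ("pristine")
zfs snapshot demo/second@abc

# ... destructive work happens ...
touch /demo/second/junkfile

# Return to the snapshotted state in seconds,
# discarding everything written since @abc
zfs rollback demo/second@abc

ls /demo/second   # junkfile is gone
```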
Right, so in our case our QAs were free to deploy code or do whatever they wanted; they would set things up and then come and tell us: please snapshot this, this is pristine, I'm going to do something destructive now. They do their destructive stuff, come and tell us to take them back to the pristine state, we run a command like this, and they're back at the pristine state. This matters because, even though creating an environment and mounting it is fast (you saw the speed at which a clone is made and at which you run the share command, and mounting it on VMware is similarly fast if you have automation in place), application deployment is not so fast. So to avoid that time, we take snapshots for people that way. And because of how those deltas work, we found that our deltas were just 150 gigs, so we didn't have to provision 12 terabytes of usable space; we just did something cheap like this and it worked out for us. Right, I'll get out of this demo; there are one or two slides left and our talk should be through. So this is what we want to do now. The projector, I think, just shut itself off; okay, never mind, we go on with the show. What are we going to do in the future? We are getting into the space of big data analytics and such. Imagine a person comes and says: with a lot of pain I've created a combination of records and everything; this is my test database in Oracle, and I would like you to back it up. I'll tell you an actual use case at work: we have a setup where there is a 3-terabyte file, just imagine a test file which is 3 terabytes. You want to keep a pristine copy. Now someone says: I would like to play with this data, I need to run something destructive. Just copying 3 terabytes is going to take about 6 hours, and then they say, okay, throw that away and give me another copy, and that's going to take 6 hours again, right?
So instead, if you have mechanisms like this, you keep that 3 terabytes you created with great pain, mark a snapshot, and then use snapshots and clones. Now you could, for example, give 18 copies to people: you tell everyone, hey, you've got a QA environment, here's a copy of 3 terabytes for you; you've got one more, here's a copy for you; you made some changes, want me to snapshot it, no problem. We've seen that the deltas in these test cases are at worst around 10 gigs. So on 3 terabytes, if you have to make 10 gigs of changes, why spend another 3 terabytes? 18 copies of 3 terabytes is over 50 terabytes of usable space, and at about $6,000 a terabyte you see how the numbers quickly add up. But with means like this you can do it quickly and cheaply, especially for scratch or throwaway environments. So how much did this cost us? A computer which was just lying around, and 3 disks of 6,000 rupees each, which was around 18,000-odd rupees, and this was around 3 years ago; nowadays my team tells me it's around 4,000-odd per disk. It was very cheap, so we went ahead with it. The next things we are looking at: one is database virtualization, the other is pushing this through the organization. What I know is a problem is increasing confidence levels. So what we do now is, in each of our offices, I've said: just get 3 disks, set up a box, try it out for yourself. Everyone has reported success; they can all get it running with about 10 hours of playing with it. With this confidence level we are now going to buy what's called a JBOD chassis: a chassis into which you just plug disks. Imagine being able to plug in 24 disks. The company just might use it, because once you have these kinds of advantages, and it's all just test data anyway, it's a lot of fun besides. So these are our future directions.
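The 18-copies idea can be sketched with a loop. The pool and dataset names (tank/testdb) are illustrative, not from the demo:

```shell
# One pristine multi-terabyte dataset, many cheap per-tester copies
zfs snapshot tank/testdb@pristine

for i in $(seq 1 18); do
  zfs clone tank/testdb@pristine tank/testdb-qa$i
done

# Each clone starts at roughly zero bytes used;
# only each tester's deltas consume real disk space
zfs list -r tank
```

Handing out a "copy" this way takes seconds instead of the hours a full 3-terabyte copy would take.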
Yes — so like you saw zfs snapshot, there is zfs send, so you can do funky things. There's another thing we did. Let's say everyone moves to ZFS, for example for virtualization or whatever — yeah, just a minute more. What you can do, and this is done by a number of people in other countries at least, is mark a pristine environment with a snapshot, do more work, say "hey, even this is good" and mark another snapshot. Now you can export that snapshot as a file system to others, and they can attach it and start using it. And then when you want to do an update, mark another snapshot and send the delta, so that people can apply the delta and just get started. Oh yeah, sorry — block devices: just like we shared over NFS and SMB, you can also share it as iSCSI. A SAN, typically and strictly speaking, means the iSCSI protocol, FCoE and things like that. So iSCSI is also supported — that's also how we use it with VMware. Your question? Yes. Again, I made a mistake, I should have told you this earlier. If you have three disks and you create a plain aggregation like I did, then if even one of them fails, the whole thing is gone. But that's not what you really do. You use something called RAID-Z. There is RAID 5, but it has a problem called the RAID 5 write hole, where in certain circumstances you lose data. RAID-Z uses a different sort of algorithm where that doesn't happen. So where we said "zpool create demo", you would say "zpool create demo raidz" and add three disks — you need three disks to start with. Then if any one of them fails, you can attach a new disk live and start fixing it. So I'll tell you what: for those of you who are interested, about once in three or four months I conduct these sessions at my place. We are part of a group called the Pune OpenSolaris user group.
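Both answers above map onto a handful of commands. A sketch — the pool name `demo`, the device names, the filesystem `demo/env`, and the remote host are all illustrative:

```shell
# Redundancy: create the pool as a raidz vdev instead of a plain
# aggregation, so any single disk can fail without losing the pool.
zpool create demo raidz c1t0d0 c1t1d0 c1t2d0

# If a disk dies, swap in a replacement live; ZFS rebuilds
# (resilvers) the data onto the new disk in the background.
zpool replace demo c1t1d0 c1t3d0

# Distributing environments: send the pristine snapshot once in full,
# then only the incremental delta between successive snapshots.
zfs send demo/env@snap1 | ssh otherhost zfs receive tank/env
zfs send -i demo/env@snap1 demo/env@snap2 | ssh otherhost zfs receive tank/env
```

The incremental `zfs send -i` stream contains only the changed blocks, which is why pushing a 10 GB delta is so much cheaper than re-sending 3 TB.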
OpenSolaris is more or less defunct, but a lot of us have moved on and we do our own things now. We conduct these kinds of sessions — a session is usually around three hours, you do hands-on stuff and see things on your own laptop. So just let me know and I'll put you in touch with the right mailing list. You can sign up there, and whenever we do a session like this you can drop by with your laptop and try things out. And this is just ZFS and storage — we do sessions like this with DTrace and networking as well. So that's it. Any other questions? One minute, he's got a question, please. Okay: ZFS is natively available and supported on the OpenSolaris family of systems, and it's supported on FreeBSD. There is a Pune-based company that has done some amount of work on top of another project, and they have RPMs which run on the Red Hat platform — I have not tried that yet. It's for Red Hat and I think for Debian; they certainly do not support Ubuntu there. That's one option, but there is also a separate project called zfs-fuse, built on FUSE — Filesystem in Userspace. So you can actually try it out — don't use it for production, because it's not as stable as the kernel-based implementations — but you could try it even on your Ubuntu box: you run zfs-fuse and use it to create pools out of disks or sparse files, whatever you want, and you can try out all these things there. This is what I have heard — I have not seen it with my own eyes or tried it with my own hands, but there is enough talk about it. Yes. Please don't be tempted to put it in production and depend on it, though. You can do all of these things, but they are only as good as the underlying technologies and their underlying reliability.
I am telling you this for a reason. Once, after we had a session like this about one and a half years ago, one guy started supporting people in his organization based on a VirtualBox setup, and we had to tell him: don't do that. It's very tempting — you got it working, you tell someone "hey, just mount it," it works, great, "I am snapshotting," people are having fun. As a sysadmin I can tell you: experiments become production very quickly. Okay? So just be careful. (Question from the audience.) Oracle Solaris 11 does have ZFS, yes. I don't use Oracle Solaris 11 — I have my own kind of setup, so I'm fine with it. There is this project called OpenIndiana — that's the thing I showed you in VirtualBox, a distro called OpenIndiana. I used that for the demo, and it works well. In fact, my instances at work are no longer Solaris; I have replaced all of them with OpenIndiana, because I work closely with that group and I know that there are people giving it attention. Time is up — it's five minutes to one. You had a question. Alright. Sure. Okay. Thank you.