Good morning, everyone. Hope you're enjoying your time here in Vancouver, and welcome to the last day. My name is Simon Dodsley and I'm from Pure Storage. I'm basically in charge of all of our open source integrations, and today I'm going to give you a high-level 101 on Cinder replication. For those of you who don't know, Cinder is the block storage project within OpenStack. I've been working with the Cinder community for about nine years now, so I've got a fairly good idea of what's going on. I see some of the Cinder community here as well, so they're going to prove me wrong, I'm sure, a number of times.

So, very quickly, OpenStack and Pure Storage: we have been contributing to OpenStack since the Juno release, which is the second half of 2014. Initially we had a Cinder driver for our FlashArray product, and that basically supports a number of different protocols: iSCSI, Fibre Channel, and recently we've been adding NVMe capability; the latest one will be NVMe/TCP, which will come out in the Bobcat release. Even though this is a conversation about Cinder, I might as well mention we do have a Manila driver as well; that came out in the Xena release. And over time, both myself and a number of other employees at Pure have contributed to over 20 different projects within the OpenStack umbrella. So we're very keen on OpenStack, we're very keen on open source methodologies, and I'm a big open source proponent within the organization. And, just for Brian there, we've got, in the bottom right corner,
the Cinder logo. The horse is named Argo, and we are all officially Argonauts.

So let's talk about Cinder replication. It was very much designed for a specific use case in its current incarnation, and that was basically your back-end storage device blowing up. Okay, literally a smoking hole in the ground. That was the design scenario, and what Cinder replication does is give the admin the ability to fail over from that smoking hole in the ground to the replicated array. In doing that, any volume that has been replicated (and we'll discuss how that's replicated shortly) remains available to Nova. Now, you do have to reconnect Nova, because obviously the back-end connection, whether it be iSCSI or Fibre Channel or whatever, has to be reconnected: different IQNs, different WWNs, things like that. But your data is still around. Now, if you have volumes that have not been replicated, then those become unavailable, and there's obviously no way that Cinder can actually do anything with a volume that's in a smoking hole in the ground.

Now, one of the things that was added after the initial incarnation of replication was the ability for the admin to freeze and unfreeze that back-end. So if you're in a failover scenario, for example, and you've got your replicated volumes on your second array, and you know you're going to be able to bring your first array back up, you want to stop people doing snapshots on that remote array, because snapshots can't be brought back when you do a failback. Okay, so you can freeze and unfreeze that back-end. Again, the freezing is really more around doing things like snapshotting. You can still provision if you want to... actually, no, freezing stops you provisioning as well.
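As a concrete sketch of that admin workflow, the failover, freeze, and failback operations map onto the cinder client roughly like this (the host@backend name and the backend_id are placeholders for your own deployment):

```shell
# Fail over the backend to its configured replication target
cinder failover-host myhost@puredriver-1 --backend_id pure-dr

# Freeze the failed-over backend so no new snapshot or provisioning
# operations land on the secondary array while you repair the primary
cinder freeze-host myhost@puredriver-1

# Once the original array is back, unfreeze and fail back
cinder thaw-host myhost@puredriver-1
cinder failover-host myhost@puredriver-1 --backend_id default
```

Note that failing back uses the reserved backend_id of "default" to point the backend at its original array.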
I think that includes creating new volumes too. I'm looking at the Cinder folks... no one's disagreeing. Okay.

So, basically, the high-level state of replication today is that we're now at version 2.2, codenamed Tiramisu, and that's been around since the Pike release. Currently there are 28 back-end storage drivers, as of the Antelope release, that support replication. Now, not all of them support all the different types of replication; some of them do, some of them don't. Again, you need to go speak to your specific vendor to find out which particular types they support. Again, as we mentioned before, we have this ability to fail over and fail back. But, interestingly, people ask me this question a lot: does the target back-end, that replicated back-end, have to be under Cinder control? The answer is no, it doesn't. You can actually replicate to an array that is outside of Cinder's understanding. There are people who use that scenario to effectively do DR replication to other OpenStack clusters in different sites. There are some things you need to do around that, but you can do it.

The four items I've got listed here, replication_device, replication_enabled, group_replication_enabled, and replication_type, are the really important parameters within OpenStack Cinder for how you actually configure replication: what capabilities are available to the administrator and the users, and what can be done to the volumes. The replication_device is configured by the admin; that goes into the back-end stanza in cinder.conf, and I'll show you an example of what that looks like in the next slide. And then we get into the volume types and the group types. So, talking about volume types: you create a volume, you have a volume type. If you want to replicate a volume, then you have to have replication_enabled and replication_type set within that volume type. That then allows the Cinder scheduler to work out which back-ends are capable of doing that replication, creating the volume on those back-ends, and obviously automatically setting up the replication of those volumes.

The other thing to note is that replication can only occur between back-ends of a similar type. So, in our case, Pure to Pure; or a PowerMax to a PowerMax, or Hitachi to Hitachi. You can't have different back-end vendors replicating to each other; it doesn't work like that. Some people think it might, but it doesn't, unfortunately. Also, within Cinder you can actually configure multiple replication targets, so you can have one-to-many replication, should you want to do that, or you can set up different volume types that replicate to one array or a different array. So there are lots of different things available to you.

Now let's talk about the different types of replication. There are basically two, with a third one that we've sort of recently added that I'll talk about. The first one is asynchronous replication. This one is basically what most people consider to be snapshot-style replication.
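Before going further into the replication types, the volume-type side of the configuration just described can be sketched with the openstack client (the type name is a placeholder, and the replication_type value has to match what your backend's driver actually supports):

```shell
openstack volume type create replicated
openstack volume type set replicated \
  --property replication_enabled='<is> True' \
  --property replication_type='<in> async'

# Volumes of this type will only be scheduled to backends that report
# replication support, and replication is then set up automatically
openstack volume create --type replicated --size 10 my-volume
```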
Async replication can be done over any distance whatsoever. I mean, we have customers who are replicating Pure arrays literally around the world; as long as there is an IP connection routed between the two arrays, then you can replicate volumes across it. And, as I mentioned before, the target array does not necessarily need to be under Cinder control.

Now, how frequently those volumes are asynchronously replicated is completely down to how you configure that asynchronous replication. Different vendors do it in different ways, different vendors have different capabilities, and, depending on your vendor, you would need to go look at the configuration documentation for that particular vendor to find out what the capabilities are. But effectively what you're doing with asynchronous replication is a point-in-time copy, slash snapshot, depending on the vendor and how they're implementing it, at a particular time. So you say to OpenStack, "snapshot this volume", it will go and create a point-in-time snapshot, and it will move that volume over to the remote array. Now those can be individual volumes, or they can be consistency groups, or they can be generic groups; those are the different capabilities available.

Now, when you've created that asynchronous snapshot, it's effectively outside of the control of Cinder, to a certain extent. If you want to take that snapshot on the remote array and convert it into a volume that you can use, then you have to do that using the array vendor's tools, CLIs, command sets, things like that.

The line at the bottom here is the replication_device I talked about before. This is what it would look like for a Pure array in a cinder.conf stanza for a Pure back-end. We specify what the back-end ID is; again, that could be inside or outside of Cinder control.
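The slide's example isn't reproduced in the transcript, but a minimal sketch of such a backend stanza might look like the one below. Every name, address, and token is a placeholder; the replication_device keys (backend_id, san_ip, api_token, type) follow the format documented for the Pure driver. It's parsed here with Python's configparser purely to show how the comma-separated fields break down:

```python
import configparser

# Hypothetical cinder.conf backend stanza; all values are placeholders.
CINDER_CONF = """\
[puredriver-1]
volume_backend_name = puredriver-1
volume_driver = cinder.volume.drivers.pure.PureISCSIDriver
san_ip = 10.0.0.10
pure_api_token = PRIMARY-TOKEN
replication_device = backend_id:pure-dr,san_ip:10.0.0.20,api_token:DR-TOKEN,type:async
"""

cfg = configparser.ConfigParser()
cfg.read_string(CINDER_CONF)

# Split the replication_device value into its key:value fields,
# which is roughly what the driver sees at startup.
device = dict(item.split(":", 1)
              for item in cfg["puredriver-1"]["replication_device"].split(","))
print(device["backend_id"], device["type"])  # pure-dr async
```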
You specify the IP address, and you also specify an API token. I didn't have room to put it in there, but you would put an API token in our case; for some vendors it's a username and password, there are different ways of doing it. And optionally you can put type: async. The type defaults to async, so if you don't put anything there, it will always be async.

The next type of replication is obviously synchronous replication, and again, this is more to do with data-center or metro replication between two arrays that are reasonably close to each other. There is full support within Cinder for active-active Cinder with synchronous replication. This type of replication does come with a cost: you need some sort of infrastructure connecting these devices together, whether it just be two arrays in the same data center talking to the same Fibre Channel switch, or it can be over DWDM, you know, metro, two data centers across a city. If you are doing distance, then there are limitations, and again, those limitations will be completely dependent on the vendor's back-end that you've got. So for Pure, for example, it's an 11-millisecond latency that is your limit for synchronous replication; Hitachi call it 300 kilometres. But whatever you're doing in synchronous replication, it really depends on your vendor's back-end and how they implement synchronous replication themselves; their drivers will implement their own specific feature capability. So for Pure Storage we're leveraging what we call our ActiveCluster capability; Hitachi would call it TrueCopy, and Dell EMC would call it SRDF.

Again, a simple example at the bottom of what the replication_device looks like; the only real difference is type: sync, and you have to specify that if you want it to be sync. The optional uniform: true is very much a Pure-specific one, and if you want to know more details about all that, you can come talk to us at the end. But, you know, every vendor is slightly different; you must go read their documentation about how to configure their replication_device parameter.

Now, I just want to add this one at the end. Pure have just literally released, as of the Antelope release, a combined sync and async capability. We're calling it trisync. So what you have is three-site replication: you do synchronous replication of a volume between two arrays, but that volume is also asynchronously replicated to a third array somewhere else in the world. It's very useful as a sort of DR bunker type solution, a disaster recovery solution. There are some caveats around configuring it. For us, you have to have exactly two replication devices in the back-end stanza, one sync and one async, obviously, and you also have to tell it that you want to use trisync. Okay, because if you just add the two definitions, one async and one sync, you could set those up as just an async target or a sync target with two different volume types, and you can still do that. But if you have trisync enabled, if you've set that parameter, then you can do this three-site replication for an individual volume as well.

To help people understand what's going on, because it's starting to get complicated now, we've actually added a replication capability field into the pool capabilities, so when you do a get-pools you can actually see what the replication capability of the array is. There's a lot more detail about this, for specific configuration; I've got a link at the bottom here to my website and my blog site, so you can go have a look at that, should you want to.

What's next for replication in Cinder? There's actually nothing planned. Unless we start thinking about whole new use cases, beyond that smoking-hole-in-the-ground scenario, then, you know, there's nothing really planned. But maybe you think we're missing something.
If we're missing a trick, please reach out to me, reach out to the Cinder community. We're more than happy to take suggestions, and if it's something that's, you know, worth looking into, and we think it would be a viable use case, then certainly it's something we'll consider for later releases of Cinder.

That's basically it from me. Thank you very much. If you do want to come and ask us any questions about Pure and our storage, or about replication in Cinder in general, we're at booth A12, which is literally just over the back, next to Red Hat's. And again, thank you very much for listening, and enjoy the rest of your time in Vancouver. Thank you.