Okay, so let's get started. Thank you, everyone, for being here with us today. Today's talk is about Ceph and OpenStack. My name is Sebastian and I work for Red Hat; I joined Red Hat through the eNovance acquisition last year, and I'm a senior cloud architect at Red Hat now. My two domains of expertise are Ceph and OpenStack, and apart from that I devote a third of my time to blogging, so here is a little bit of Ceph promotion. Yeah, Josh?

Hi, my name is Josh Durgin. I'm the RBD lead developer. I'm with Red Hat now; I was part of Inktank and DreamHost before that, and I've been working on Ceph since I graduated from college. Sebastian, tell us a little bit about what we're going to do today.

Okay, awesome. So, today's agenda. For those of you who are not really familiar with Ceph, we're going to spend a little bit of time explaining what Ceph is, what it does, how it works and why it's so cool. Then I'll explain what happened during the Kilo cycle and what our roadmap is for Liberty and beyond. Then Josh will jump into what happened in the last Ceph release, Hammer, and we'll go a little bit over the Ceph roadmap as well, including what's coming in Infernalis. And then we'll finish with the best cloud configuration: we'll basically go through all the components of OpenStack and explain why you should configure them a certain way to be connected to Ceph. So, yeah, Josh.

So what is Ceph? Well, Ceph is a software-defined storage system that's fully open source, designed for highly scalable operation, and designed to run on any kind of hardware you have. That's basically the couple-of-sentence summary. The main components of Ceph are all built on top of a low-level object store called RADOS, and on top of RADOS we have three different interfaces through which you can expose storage. RADOS handles all the low-level details of replication and consistency for you, so building high-level components like a block device or an object store is much easier. The three main components on top of RADOS are the RADOS Gateway, which provides an S3- or Swift-like interface; RBD, which provides a block device for virtual machines or other workloads; and the file system, CephFS, which is the only component that is not completely stable yet.

So how does Ceph integrate with OpenStack? Well, it integrates with a number of services. At the most basic level, Cinder, Glance, and Nova all consume storage, and they can all be backed by RBD. For a long time now, since RBD has supported thin provisioning and cloning, you've been able to do copy-on-write clones from images stored in Glance on RBD to Cinder volumes or Nova ephemeral disks. The RADOS Gateway, since it provides a Swift-like interface, integrates with Keystone, so it can use Keystone for authentication. And more recently Ceilometer has entered the picture as well; we'll talk about that later.

Okay, so what happened in Kilo, and what's going to happen in Liberty and beyond?
So, well, to be really honest, this cycle was a little bit disappointing, because we didn't get enough time, and not enough attention from reviewers either, to get our patches merged. What we started doing is partially implementing RBD-backed snapshots for Nova instances. Today, when you snapshot an instance, we use qemu-img: the snapshot is taken locally on the hypervisor, then streamed into Glance and uploaded into Ceph, which is extremely slow and really inefficient. You also have to provision a certain amount of space on your hypervisors for these snapshots to happen in. What we want is to use RBD snapshots instead of QEMU snapshots. The way this is implemented now is just at the Glance level; nothing has been merged into Nova yet, unfortunately.

We also have something that is not directly connected to Ceph but that we really need, which is image conversion in Glance. Glance accepts several image formats, QCOW2, raw, and so on. The issue is that, given that Ceph has capabilities such as copy-on-write clones, things we have already wired into OpenStack, you really want to use raw images to take advantage of all these features. The downside is that it's really difficult to force all of your users to upload raw images: no one wants to upload 20 or 50 gig images, and no operator wants 150 gig images landing in their cloud. So we had to come up with a way to convert on import: a new image comes in as QCOW2 and gets converted to raw on the fly. Only the first bit of this is done, and we're going to continue that work in the Liberty cycle.

We now support Ceph in DevStack; well, we have been supporting it for one release now. The reason I did this is mainly that I had to run several benchmarks against an existing Ceph cluster and wanted to rapidly get an OpenStack environment up and running, and what better tool for that than DevStack. So now you can simply bootstrap your resources with DevStack and connect them to an existing Ceph cluster. It's really nice for quickly bootstrapping new instances and then starting to run benchmarks.

We have Ceph on the gate now for the Cinder driver. We have the Ceilometer integration with the RADOS Gateway: it's basically a polling mechanism where Ceilometer goes and fetches some of the information exposed by the RADOS Gateway, and we provide metrics from that. We support retype to change QoS on the fly. Once again, it's not directly connected to Ceph, but since Ceph doesn't support any QoS at all, it's always good to use the libvirt-level throttling. So let's say you create a new volume type and associate QoS specs with it; if you then retype a volume to that type, its QoS changes on the fly, which is really useful. And there's future-proofing for new RBD features: it's something at the driver level, just a way to detect new features in newer RBD versions.
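To make that retype and QoS flow concrete, here is a rough sketch with the Cinder CLI; the type name and the limits are purely illustrative:

    # create a QoS spec enforced on the front end (libvirt throttling)
    cinder qos-create high-iops consumer=front-end read_iops_sec=2000 write_iops_sec=1000
    # create a volume type and associate the QoS spec with it
    cinder type-create high-iops
    cinder qos-associate <qos-spec-id> <volume-type-id>
    # retyping an existing volume to this type changes its QoS on the fly
    cinder retype <volume-id> high-iops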
Josh will walk through the new RBD features when we get to what happened in Hammer and what's happening in the next Ceph release.

We also fixed a couple of bugs. The first one was nova evacuate, which was kind of critical, because many operators now rely on Ceph and boot all of their instances on it, and if one of your hypervisors goes down you really want to be able to evacuate an instance, just to re-bootstrap that instance somewhere else in your OpenStack environment. So having this broken was a bit of a problem. We fixed some Cinder issues: while cloning volumes, that is, creating a volume from a volume, you could get a clone with the wrong size; this was fixed. Same goes for Nova instance resize: reverting a resize had a bug, so we had to fix that one too. Then there's the disk space reported by the hypervisor itself. RBD-backed ephemeral disks were introduced in the last release, but nothing was done at the hypervisor stats level. Basically, when you run something like nova hypervisor-stats you get memory available, disk space, and things like this, and in a setup where the image type points to Ceph you don't want to read the layout of the hypervisor's local file system; you need to look at your Ceph cluster. That code was still reading the file system where Nova lives. And we fixed something in Cinder too, when creating a volume from an image that has extra properties such as a kernel and a ramdisk.

Our roadmap for Liberty, on the Nova side: force-detaching volumes. It's something more generic that will be implemented in Nova, but we might have to make a few changes on the Ceph side as well to support the force-detach operation. The use case is that you might have instances running on one hypervisor, and the hypervisor dies, but you want to force-detach the volume because you can maybe reattach it to another instance. We have to fix the QEMU I/O throttling for ephemeral disks; it's actually something really simple, and the patch has been around for about a release, but it didn't get much attention. Multi-attach support for RBD: once again it's a more generic API thing, but we have to make a couple of modifications in the RBD code to support multi-attached volumes. That's a really interesting use case: let's say you have an application that only reads its data, so it never writes anything. You can simply boot an instance, create a volume, put the application's data in it, and then attach that volume to several instances. So that's really good. And we're planning on finishing the Nova snapshots at the RBD level, instead of going through QEMU, too.

For Cinder, we're going to support volume migration. It works really closely with the retype support: you have a volume with one type, you change the type, and it ends up on a different backend.
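As a sketch of what that retype-driven migration can look like from the CLI; the backend names are hypothetical and just have to match the volume_backend_name values defined in cinder.conf:

    # two volume types pointing at two different backends
    cinder type-create ceph
    cinder type-key ceph set volume_backend_name=ceph-rbd
    cinder type-create other
    cinder type-key other set volume_backend_name=netapp-1
    # retype an existing volume; on-demand lets Cinder migrate the data
    # when the new type lives on a different backend
    cinder retype --migration-policy on-demand <volume-id> other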
So we have several options for this volume migration: within a single Ceph cluster, that is, from one pool to another; from one cluster to another; or from Ceph to anything else, NetApp, whatever.

We also want to be able to import and export snapshots or volumes for RBD. It's a really interesting use case, because people might be using virtualization solutions like Proxmox at the moment and want to move away from Proxmox to OpenStack. They already have their volumes ready, so you simply import them and your volume gets registered in the database; you don't need to do anything really special. And the opposite is true as well: if you want to leave OpenStack, you want to get all of your volumes back.

We also have to update our Cinder backup driver for Ceph, just to be compliant with the differential-backup API, which is a bit unfortunate because we were already doing differential backups: the Cinder backup driver for Ceph was clever enough to detect whether it had to do a full backup or a differential backup, but now we have to rework that a little bit. Speaking of backups, we also have to implement orchestrated differential backups, so we can say: take a snapshot of that volume once a week and keep a retention of three, or something like that. And of course we have to implement multi-attach at the RBD level; it's something that spans Cinder and Nova, so if we do it in Nova we have to do it in Cinder as well.

A little bit beyond that, we have to support consistency groups in Cinder. It's something that has been around in Cinder for a while now, and we haven't done any work on it yet, basically. For those of you who don't know, consistency groups are a way to get a consistent snapshot of a given environment: let's say you have an entire application and you just want to snapshot everything at the same time, so you get a consistent snapshot of your application across all the different layers. We want to implement RBD mirroring support; that's a Ceph feature Josh is currently working on, and we want to expose it to OpenStack. And we somehow want to integrate with Manila, the distributed file system as a service. By default, the easiest thing we can do is simply map an RBD block device and re-export it over NFS. Or, when CephFS is considered ready, we can put Ganesha in front of it to provide CephFS and NFS re-exports. You also want to see Sage's talk tomorrow; he's going to discuss that subject a little bit. So now I'll hand this over to Josh.

Yeah, so what's happening in Ceph? A number of things are happening, in the land of RBD especially. The most recent release of Ceph is called Hammer, and there are a bunch of optimizations merged there. At the RBD level, and down at the RADOS level, a bunch of cache hints were added, so that things like rbd export and import don't pollute the page cache on the OSDs or the local cache on the client side.
And there are allocation hints, so that tiny writes don't start fragmenting files. These are now sent with every write: RBD passes an allocation hint down through to the underlying file system on the storage server, so that a file which we know is going to back an RBD object of four megabytes gets fallocated to four megabytes and doesn't get fragmented. That kind of helps keep performance consistent, so it doesn't degrade so much as the file system ages.

On top of these, we also have support for a bit of readahead at the RBD level. This is necessary especially when you're booting a virtual machine, because at that point the kernel isn't there yet to do readahead with its page cache, so you have a BIOS or an early bootloader doing lots and lots of tiny sequential I/Os. There's now readahead in RBD to turn those into larger reads and increase the speed there. This helps in particular when you're using an older guest that uses the IDE bus: we found that booting with readahead enabled on IDE buses decreases boot time by something like 50 to 80 percent. More advanced buses already do larger reads, so the gain is smaller there, but it's still present.

In terms of clones, for a long time now we've supported copy-on-write. In addition, some folks from a university in China implemented support for copy-on-read. This is a more specialized use case than copy-on-write in general. It's especially useful if you have an unusual scenario with two different Ceph clusters, or one Ceph cluster spanning two sites with high latency between them, and you know you're going to have clones on one side referencing a parent on the other, but you eventually want all the data from the first site to be present on the second. With copy-on-read, whenever a guest reads an object, RBD goes ahead and copies that object entirely from the parent instead of waiting until a write happens. So it's an opportunistic copy of all the data, while still allowing you to use the volume immediately.

The next two really large features in RBD are more groundwork, enabling RBD to be ready for other features like RBD mirroring in the future. The first one is exclusive locking. This is basically to ensure that only one RBD client can write to a given image at a time. This is generally enforced by Cinder, but it's really good to have guarantees at the lower layers so you can be sure that your data is safe and that you don't get inconsistencies in the new metadata we might be adding to RBD. One of those things we've added is keeping track of which objects exist in an RBD image; we're calling this the object map. Right now it's a new feature that you enable when you create an image, and it gives you much better performance for clones, whereas before you had to go and read. The object map keeps track of which objects in a given RBD image exist, and that enables lots of optimizations for things like exporting and importing an image. Deleting an image is much faster because we only need to delete the objects that we know exist.
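As a sketch of how you might enable these features for newly created images on a Hammer-era client, through the client-side Ceph configuration; the pool and image names are illustrative:

    # /etc/ceph/ceph.conf on the client (e.g. the hypervisor)
    [client]
        # these features require format 2 images
        rbd default format = 2
        # layering (1) + exclusive-lock (4) + object-map (8) = 13
        rbd default features = 13

    # check an image created afterwards; 'features:' should list the new ones
    rbd info volumes/volume-1234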
The object map also improves performance in general for reading from clones. If we don't know which objects exist, we have to go and query them to see at which level in the hierarchy of parents and clones an object exists, whereas with an object map we know exactly where we need to go, so we only have to make one request.

Right now, the exclusive-locking and object-map features have to be enabled when you create an image. But in Infernalis you can actually enable them on the fly, dynamically. That's something we might end up backporting to a later Hammer point release, since it makes it much easier to introduce into an existing environment, but we're waiting for it to stabilize.

A number of other things are already ready for the Infernalis release coming up. One is enabling you to specify more RBD-specific configuration options, like the cache size, on a per-image basis. This is stored in the Ceph cluster itself, so you don't have to mess around with passing things all the way through libvirt, QEMU, or Nova; it makes it very simple to customize individual images or volumes you might have. A lower-level feature is deep flatten. Normally today, if you go and flatten an RBD image, any snapshots you created prior to that still reference the parent image. With support for deep flatten, when you flatten an image it flattens all the snapshots of that image as well, so you don't have to worry about deleting snapshots before you can delete a clone's parent; you can just flatten the clones and break the dependency entirely.

Along with the object map, in Infernalis there's another optimization added on top of it to enable faster RBD diffs, by keeping track of exactly which objects changed between which snapshots. This also allows us to report what actually changed in an image between snapshots, or between a parent and a clone, much more simply. That shows up in the new rbd du command, which shows you how much disk space is used by a given snapshot or image; thanks to the object map it can be computed very quickly by just looking at the metadata, without actually reading the entire image.

One of the biggest features we're working on is RBD mirroring. This is basically asynchronous replication for RBD, for disaster recovery. You have one site whose images' writes are being mirrored to a second site, keeping the same structure of images on both sides, so that eventually you'll be able to fail over: when the first site goes down, you can bring up the second site and have exactly the same things running on top of it. This is a very large project, so we're laying some groundwork now; it's possible that it will be experimentally usable in Infernalis, but it likely won't be fully usable until Jewel. Those are just the RBD-level features; you can find other talks about the generic Ceph features that are also happening.

Now we want to talk a little bit about how to best configure OpenStack to take advantage of Ceph. One of the things that we've recommended for a long time now is enabling RBD caching. This is an in-memory cache on the client side which basically allows you to absorb small bursts of writes.
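On the hypervisor, these client-side settings live in the [client] section of ceph.conf. A minimal sketch covering the cache options and the admin socket and log settings discussed in a moment; the paths are illustrative and the directories need to be writable by the user QEMU runs as:

    [client]
        rbd cache = true
        rbd cache writethrough until flush = true
        # speed up parallel management operations such as image deletion
        rbd concurrent management ops = 20
        admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
        log file = /var/log/ceph/qemu-guest-$pid.log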
In write-back mode, the cache tells the guest that a write is complete once it's in the cache, but it still respects flushes, so it behaves like a well-behaved disk cache. The second option here, rbd cache writethrough until flush, makes it extra safe: it stays in write-through mode until it actually sees a flush from the guest, because some guests don't send flushes, or aren't configured to, and in that case it wouldn't be safe to actually do write-back caching. Those two settings are the defaults in Hammer, but if you're using an earlier version of Ceph you probably want to enable them on your hypervisors. Since Firefly, RBD has supported parallelizing different kinds of management operations, like deleting images; if you want that to go faster you can increase it from the default of 10 concurrent operations to 20.

The next option is the admin socket. This is something you'll commonly see with the Ceph daemons, where you can go and introspect what their state is and see what's going on with them. It can also be useful on the hypervisor side, to see what an RBD volume is doing and how many ops it's doing. It's useful for debugging: you can see which requests are currently in flight, what they're waiting for, whether they're stuck, what's happening. And it's useful for verifying configuration: you can dynamically set configuration through the admin socket, and you can also just list it out and see whether your configuration is actually being applied as you expect. Related to that, you can of course set a log file, so you can enable debug logging there and see what's going on with your RBD volumes.

In terms of Glance, with RBD, most of the time to get the most use out of it you'll want to use raw images and expose the location of the backend via the Glance API. That's the show_image_direct_url option, which lets Glance expose the backend location so that Nova and Cinder can create copy-on-write clones of those images. And if you are using the copy-on-write clone feature, you probably don't want Glance to do any local caching, since it's not actually transforming the data anyway; you can disable that in glance-api.conf by getting rid of the cache-management part of the paste flavor.

So, as I mentioned, to take advantage of copy-on-write cloning with Ceph you want to use raw images. Currently you have to convert them before uploading, but in Kilo there's basic support for actually doing this conversion in Glance itself. There are some commands here you can use to convert them yourself, if you're just downloading QCOW2 images and want to make sure they're raw. If you already have them in RBD, you can convert them directly there and then tell Glance about their location; but you do want to create a snapshot of that image and protect it, so it can be used for cloning, before you tell Glance about it.

One of the lesser-used features of Glance is the metadata that you can associate with an image. With this metadata you can tell Nova properties about how the image should be used. You can, for example, specify the disk bus to use. If you're using modern Linux guests, you really want to be using the virtio-scsi bus: it's generally higher performing than the existing virtio-blk bus, and it also allows discard support if you want to enable that separately.

So on the Nova side there are a number of different options you probably want to set. The first one listed here is for enabling discard support.
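To make the image preparation mentioned above concrete, a rough sketch; the file and image names are illustrative, and this assumes Glance is already configured with the RBD backend so the upload lands in your images pool:

    # convert a downloaded QCOW2 image to raw
    qemu-img convert -f qcow2 -O raw trusty.qcow2 trusty.raw

    # upload the raw image to Glance
    glance image-create --name trusty --disk-format raw \
        --container-format bare --file trusty.raw

    # hint to Nova that guests booted from this image should use virtio-scsi
    glance image-update --property hw_scsi_model=virtio-scsi \
        --property hw_disk_bus=scsi <image-id>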
Discard is like TRIM for an SSD: since RBD devices are thin provisioned, you can reclaim space from them by running fstrim inside the virtual machine, but in order for that to work QEMU has to know that RBD supports it and actually expose it through to the guest. This can have some bad effects on performance if you're not careful, so if you enable it, it's probably a good idea to be aware of that and to throttle the I/O with QEMU's I/O throttling.

One of the other reasons people like using RBD for Nova in particular is that it enables live migration. But since it is shared storage, Nova can't directly inject things into the disk very easily, so disabling all the injection, password injection, key injection and partition injection, is generally a good idea. You can get those things into the guest through Nova's metadata service or through the config drive instead; there are patches outstanding to actually enable the config drive to be stored on RBD as well. And finally there's a bunch of live migration flags. I can't go through and explain what all of these do, but these are basically the ones that you want to use to enable live migration to work with shared storage like RBD.

Oh, and I nearly skipped the cache mode. In order to take advantage of the RBD client-side cache, QEMU has to be aware of it, so that it sends flushes through and it stays safe. Handling that is just a configuration option in Nova, where you say that for network block devices like RBD you want to enable caching in write-back mode.

Okay, so this is a really interesting use case. People often have different sets of hypervisors: at some point you may want to build high-performance hypervisors with local SSDs, for example, and use the default image type, so that when you boot an instance it's just a file on the file system, or you could even use LVM to get better performance. But you might also have virtual machines that are less critical in terms of performance, and those can run on Ceph. Something interesting to do in that case is to build host aggregates, where you put all of your SSD hypervisors and all of your Ceph-connected hypervisors into different groups, and then you expose both of them through availability zones that you pin to a particular flavor. So when booting an instance, you either get an instance with a super-fast local SSD disk or one that lives on Ceph.

At the Cinder level there is not much to configure: all of the flags are available, but these are just simple configuration flags, which Ceph cluster to talk to, which key to use, so we don't need to go through them here. The only thing to remember is that you must use the Glance API v2, because when you're doing boot-from-volume, creating an image from a volume or a volume from an image, you want Cinder to be able to handle all of this. Something else you want to do, as mentioned earlier: Ceph doesn't really support QoS at the moment, so you really want to make heavy use of the QoS throttling. Create volume types with associated QoS specifications, then create all of your volumes with those types and attach them to your instances. And this also works with boot-from-volume, of course.
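Pulling the Nova-side options just discussed together, the [libvirt] section of nova.conf for a Ceph-backed, Kilo-era compute node looks roughly like this; the pool name, user and secret UUID are illustrative and must match what you set up in Ceph and libvirt:

    [libvirt]
    images_type = rbd
    images_rbd_pool = vms
    images_rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = cinder
    rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
    # discard (fstrim) support for thin-provisioned RBD disks
    hw_disk_discard = unmap
    # don't inject into shared storage; use the metadata service or config drive
    inject_password = false
    inject_key = false
    inject_partition = -2
    # let QEMU use the RBD cache safely, in write-back mode
    disk_cachemodes = "network=writeback"
    live_migration_flag = "VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST"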
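And one way to implement the SSD-versus-Ceph split described above is with host aggregates and flavor extra specs, relying on the AggregateInstanceExtraSpecsFilter being enabled in the scheduler; aggregate names, hostnames and flavors here are all illustrative:

    # an aggregate (and availability zone) for the local-SSD hypervisors
    nova aggregate-create local-ssd local-ssd-az
    nova aggregate-set-metadata local-ssd storage=ssd
    nova aggregate-add-host local-ssd compute-ssd-01

    # and one for the Ceph-backed hypervisors
    nova aggregate-create ceph ceph-az
    nova aggregate-set-metadata ceph storage=ceph
    nova aggregate-add-host ceph compute-ceph-01

    # pin flavors to an aggregate through its metadata
    nova flavor-key m1.ssd set aggregate_instance_extra_specs:storage=ssd
    nova flavor-key m1.ceph set aggregate_instance_extra_specs:storage=ceph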
A little bit about Cinder backup. There was a talk yesterday from Neil about backups and disaster recovery in general, so this has mostly been covered already, but I just wanted to highlight it once again. The current state of the Cinder backup implementation is that the only valid use case is when you have a single OpenStack environment and two Ceph clusters. You have your active OpenStack environment with a local Ceph cluster, and then you have another Ceph cluster in another location, and you can simply create volumes, then create backups, and then restore backups. The only issue is with the two calls that were introduced in the last cycle, the Cinder backup export and import of metadata. They basically allow you to take the database fields and re-import a volume's properties into another OpenStack environment. So ideally what you would like to do is have two OpenStack environments, take backups regularly, export all the metadata for a given volume backup, and re-import it into the other OpenStack environment, to properly do disaster recovery. But that's something that just doesn't work at the moment, and we're currently working on fixing it. It's not much work, but it's just for you to know: it's something that can't be achieved at the moment.

One last point about guest configuration. We highly recommend always installing the QEMU guest agent, because it provides many capabilities. What we really want to use in this case is the libvirt fs-freeze API, where you can perform an fs-freeze while performing a snapshot, so you can ensure that at least you get a consistent snapshot. You can also do this with hooks: you click to create a snapshot, the QEMU guest agent receives that call, and it freezes the file system; or, before that, it executes the hook, so it can dump a database or something like this, then stop the application and freeze the file system. Once it's done, you revert that: you restart the application and you're good. So you can ensure that at a certain point in time you have a consistent backup, and that it's consistent all the way through the layers, from the application itself down to the block device. That's why it's so important to configure this properly. This works with Glance image metadata: you just set the property on your image, and it gets applied when you perform a snapshot, provided the QEMU guest agent is installed.

Josh and I have been putting a lot of effort into documenting all of this. Obviously some of the bits we discussed during this talk are not yet present in that document, but we will be pushing them pretty soon. So don't hesitate to test and contribute, because this is the community-based reference for configuring OpenStack environments with Ceph. We would like to thank you all for your kind attention, and I guess we'll be happy to take some questions now; we should have seven minutes or something. If you have any questions, please go to the mic and ask your question.

Hi, I have a question about the flattening process.

Can you get closer to the mic, please?

I've got a question about the flattening process: when you use copy-on-write and your image has a parent, are there any issues or good practices around it? Is it safe, the rbd flatten operation?

Oh, flattening? Yeah, so flattening is generally safe.
The only difficulty right now is that if you have created snapshots of an image before flattening it, the snapshots will still refer to the parent, so you can't delete the parent until the snapshots are deleted. In Infernalis, with deep flatten, you will be able to flatten an image and all its snapshots at once, so you'll be able to break the dependency entirely without deleting snapshots.

What about performance? Let's say I create many, many clones from one base image and I want to flatten them one by one, for example. Will the other customers notice that I'm flattening something? I mean, will it decrease performance on the base image?

Yeah, so in general it will increase performance, because there won't be any copy-on-write to the parent anymore; any objects that weren't written before will now be there. But it is a heavy-duty operation, where it's reading the entire previous image and copying all that data. So you do want to be careful about running it: if you're doing a hundred of them at once, you might want to throttle that and make sure it's only doing, for example, a certain number of I/Os at a time.

And another question: what about caching, is it usable?

So caching is definitely getting there. There are lots of improvements in terms of how the I/O can happen there. I'm not quite sure it's entirely ready yet in Hammer, but more work is being done to make it more usable.

Okay, thanks.

Hey guys, thanks for the talk, it's always good to hear what's coming down the tubes. Just curious: Ceph is an obvious example of this, but there are other block storage solutions where you could have multiple instances of Cinder fronting a block storage solution which could do multi-site or multi-cloud anyway. Would the Cinder export and import metadata capabilities you were talking about, which aren't there yet, but assuming they were there...

Well, they are.

Do they work? I thought the idea was that they weren't working.

They do work, but the main issue is when you import the metadata into another OpenStack environment; this is where the bug is. You can safely export everything, but once you import it, there are some missing fields.

Right. So if I wanted to manage the process of handing off some block storage from one OpenStack instance to another, would the export and import, with some idea of flicking an enable or disable bit, be the appropriate way to do that? And if so, is that something you should think about, to maybe federate between Cinder implementations?

Yeah, so in general I think today the easiest way to do that is with the Cinder backup service and with these export and import metadata APIs. And you really want to look at the talk Neil gave yesterday, 'Dude, Where's My Volume?'; he went over a bunch of different scenarios around that, how to do it today, and what would be the best way to do it in the future.

Okay, thank you very much.

Hi, I wonder about the live migration flags. You presented the flags you use to enable live migration with Ceph backends, and I wonder why you are disabling the tunnelled migration there, because by default it's on and you are not setting that flag.

What is on, sorry?

The tunnelled migration flag; it's on by default, but in your setup you've disabled it. Is there a reason for that?

You mean among the live migration flags we presented there wasn't a tunnelled flag, so one flag is missing, that's what you're saying?
Yes, and I wonder why.

What flag is it, and what does it do?

The live migration tunnelled flag.

And what does it do?

It makes the migration go from QEMU to the libvirt process, then through the network to the other libvirt process, and then to the other QEMU. If you disable it, the QEMU processes talk to each other directly.

Okay. Yeah, I think there's no real reason for it; it's just a newer option that we haven't considered yet.

Hi, I have a question about object encryption. Do we support object encryption as of now?

So, object encryption: right now Ceph supports at-rest encryption using dm-crypt on the underlying storage devices. In the future we might want to actually do that at a higher level and support on-the-wire encryption as well, but right now it's not there. So we're looking at doing that.

Do we have a roadmap for that one? Is that somewhere down the line?

I'm not exactly sure where that is on the roadmap.

Okay, fine, thanks. I have one more question, about the Ceilometer Ceph integration. It is currently using polling, but that is not scalable for larger clusters, like 50,000 or something like that. Is there a plan to change that?

Not as far as I know. Yeah, in terms of Ceilometer, the RADOS Gateway right now doesn't have any kind of notification API, but you can kind of rely on things like looking at its logs. Potentially you could have some kind of integration using RADOS directly and its watch/notify primitives to do that kind of thing, although I'm not aware of anyone planning to do that right now.

Okay, fine.

Yeah, but the main issue with that is that you have to reimplement all the logic: if you do it directly at the librados level, you have to do way more work than just consuming what the RADOS Gateway exposes to you. Currently polling is okay for a few nodes, but if you go to a larger setup it becomes a problem.