Hi everyone, and welcome to the last Cloud Tech Tuesday. I'm Amy Marish, a Principal Technical Marketing Manager here at Red Hat, and I'm the OpenStack community person. Today we have Jerry Stronsky, who is part of our OpenStack development team, and he's here to talk about OS Migrate. Jerry, do you want to introduce yourself more thoroughly?

Sure. I'm a Principal Software Engineer. I've worked on OpenStack since 2013 or so, and for the last two years I've been on the OS Migrate project, which I'm going to talk about today.

Great, and welcome to the show. Do you have a presentation to start with today?

Yes, first a presentation and then a demo, so I'll share my screen. Can you see my screen? ... There we go. OK, cool.

So today I want to talk about OS Migrate, which is an OpenStack parallel cloud migration toolbox. First I'll say something about parallel cloud migration in general, then about the OS Migrate project specifically. We'll talk about the two main use cases for OS Migrate, which are pre-workload migration and workload migration, and then I'll show a demo.

First, parallel cloud migration: what do we mean by that term? Most of the time we mean an alternative approach to upgrading OpenStack. Instead of upgrading the cloud in place, you deploy a new cloud and copy or move content from the old cloud to the new one. The biggest difference is that with a migration you're dealing with tenant content most of the time, so the whole problem is scoped to tenants, whereas an upgrade affects the whole cloud at once. An upgrade can be faster, but it also brings more widespread risk to the whole procedure.

The other, less common use case for parallel cloud migration is just moving to a different cloud without upgrading. That could be moving between providers, moving from staging to production, or, as we've seen, moving from a cloud where a mistake was made during deployment to a cloud that was configured properly.

So why would one do a parallel cloud migration instead of an upgrade? One trigger is that the hardware used to deploy the old cloud is now obsolete and you're buying new. In theory you could replace the hardware within the existing cloud gradually, but it's a lot of work, and oftentimes people just want to deploy a new cloud on a green field and then deal with the content. Another is making fundamental configuration changes to the cloud that are either impossible or very hard to make after the cloud is deployed, things like switching the SDN provider, for example. Or, as I've already mentioned, environments where cloud-wide risk or cloud-wide downtime from upgrading is unacceptable for some reason will probably prefer a parallel cloud migration as well. Or a more widespread public provider may want to give their users a choice when a new version of the platform becomes available, with some time span for migrating; that would be a parallel cloud migration case too, as would any combination of these factors.

From a hardware perspective, a parallel cloud migration with Red Hat OpenStack Platform looks like this.
On the left side we have the old cloud, which has a director node, the node that manages the OpenStack deployment, then three controllers and some resource nodes, meaning compute nodes. We deploy a new cloud: we start with a new director and a new set of three controllers. Then either the old hardware is being decommissioned and we just install new hardware in the new cloud, or, if we want to reuse the hardware, we start gradually moving the compute nodes from the old cloud to the new cloud.

So that's the hardware perspective. What's missing is the content perspective: when we say we move compute nodes from old cloud to new cloud, we still have the content of the cloud to deal with, the actual workloads, because they're not going to be moved just by moving compute nodes. That content perspective is what OS Migrate focuses on. Another way to look at it: in this context, OS Migrate is not a push-button solution for your whole OpenStack migration. You'll still have to deal with the scale-down, scale-up of nodes with director when doing the parallel cloud migration. But with director you will not be able to move your workloads, and that's where OS Migrate comes in.

So what is OS Migrate? It's a content migration toolbox for OpenStack: you move the content you've deployed onto one OpenStack cloud to another OpenStack cloud. It's a project I'm working on with my teammates, Carlos and Phil. We have pre-workload migration, by which we mean things that can be migrated ahead of time without affecting the source workloads, things like virtual networks, subnets, routers, security groups, et cetera. And then we also have workload migration, which is the actual move of VMs and their data.

We test the project end to end with Red Hat OpenStack Platform. Basically, we mimic the fast forward upgrade test cases: we test a move from OSP 13 to OSP 16, which would be the equivalent of a single fast forward upgrade, and we also test a direct move from OSP 10 to OSP 16, which would be the equivalent of two consecutive fast forward upgrades. The project is hosted on GitHub, we have docs built into GitHub Pages, and we release to Ansible Galaxy, from where it can be installed.

Jerry, quick question. Is this tested on upstream as well? Let's see, 13 would be Queens and 16 would be Train? Or is this only on Red Hat OpenStack Platform?

The end-to-end tests run only on Red Hat OpenStack Platform, but we also have what we call functional tests, which are a little simpler because we run them in GitHub, using GitHub Actions, where we have limited resources. There we test with the latest upstream DevStack. We don't test the workload migration there, just because of the resource limitations in the GitHub Actions CI, but we do test all the API aspects of the migration.

OK, great.

So I'd say the project, in terms of how it works, should not be limited to Red Hat OpenStack Platform. We just don't test it end to end with upstream, but it should work; there's no reason it wouldn't. Which brings me to my first point in the highlights: we use the standard OpenStack APIs exclusively. That mainly means you don't have to install any plugins into the OpenStack clouds. OS Migrate just talks to the OpenStack APIs; you don't even have to give OS Migrate access to your OpenStack nodes. It doesn't, for example, read the MariaDB database directly. Everything goes through the APIs.
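Because everything goes through standard Keystone-authenticated APIs, the credentials OS Migrate needs look like any other OpenStack client's. As a rough sketch of the variables involved (the names below are patterned on the project's docs, but treat them as illustrative rather than authoritative):

```yaml
# os-migrate-vars.yml (illustrative): plain Keystone credentials per side.
os_migrate_src_auth:
  auth_url: https://src-cloud.example.com:5000/v3
  username: migrator
  password: REDACTED
  project_name: demo-project
  user_domain_name: Default
  project_domain_name: Default
os_migrate_dst_auth:
  auth_url: https://dst-cloud.example.com:5000/v3
  username: migrator
  password: REDACTED
  project_name: demo-project
  user_domain_name: Default
  project_domain_name: Default

# Where exported resource YAML files are kept on the migrator host.
os_migrate_data_dir: /home/migrator/os-migrate-data
```

Note that these are ordinary tenant credentials; nothing here requires admin rights or node access.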
Well, that also means, and this is something we focus on and test, that OS Migrate is runnable as a tenant, without admin privileges to the cloud, at least as far as tenant resources are concerned. If you want to migrate, say, a public Glance image, then yes, you need to be an admin. But if you want to migrate your private image in your project and your private VMs, you don't need admin privileges to run OS Migrate and migrate content with it.

We try to be as transparent as possible. For example, we export resources into editable YAML files: if you want to change some aspect of a resource before importing it into the destination cloud, you can do it in the YAML. We'll see that later in the demo.

The tool is stateless; it has no database. The YAML files are the only source of truth for how resources are going to be migrated and created in the destination cloud.

The tool is idempotent. If, for example, you had exported ten resources and the fifth failed on import for whatever reason, you don't have to worry about deleting the already-imported resources before retrying. After fixing the root cause, you just retry with the same file; OS Migrate sees which resources have already been imported and won't create them twice.

It's what I call cherry-pickable, which basically means modular. When you're running the migration, you don't migrate the whole cloud at once. As already mentioned, you go by tenant and by resource type, and you can further scope by the names of the resources, so you don't even have to export all the content in your project. You can pick what you want to export and import.

Quick question. Since you can cherry-pick and do one tenant at a time, say you wanted to migrate to a couple of different new clouds: could you send one tenant to one cloud and another tenant to another, or do they all have to go to the same location?

You could send different tenants to different clouds. You'd do that by specifying different authentication parameters via Ansible variables; you'd just point at a different cloud API and OS Migrate would talk to that destination instead. Within a single playbook run of OS Migrate you wouldn't be importing multiple tenants anyway; we typically do one playbook run per tenant per resource type, which we'll see in the demo as well. There are some admin resources that operate across tenants: users and projects, for example, target a cloud rather than a tenant when you import them. But yes, it would be possible.

And my last point here: it wasn't our first choice, but eventually we settled on making OS Migrate an Ansible collection, and in hindsight I think that was a good choice. There are a lot of people in the industry and around OpenStack who know Ansible, so people who already know it come to OS Migrate and feel familiar with it basically from day one. And if you don't know Ansible yet, then while learning OS Migrate you're essentially learning how to use Ansible, which is a skill that transfers to other use cases.
So that was what's nice about OS Migrate. Now, there are some caveats with the approach we chose.

Going through the APIs rather than directly to the databases means that when we create the resources in the destination, they get auto-generated UUIDs, which will differ from the source. So it's not going to be a one-to-one mirror copy in every aspect. We try to make it a mirror copy wherever we can, but for the UUIDs, for example, we cannot. So if resource UUIDs are important in a particular setting, say to some SDN provider or drivers, this needs to be kept an eye on and checked to see whether things will work out.

The other thing, which ties into the statelessness and idempotency, is that we use name-based references. Because we don't keep a database, we don't keep mappings from source UUIDs to destination UUIDs, which makes a lot of things easier: partial migrations, edits, et cetera. It even makes it easy to change a reference to a resource without dealing with UUIDs. But it also means resources must have a unique name within a project, within a resource type. For example, you can have a VM named the same as a network, but you cannot have two VMs with the same name. They would export fine with OS Migrate, but when you try to import them, OS Migrate would validate the file and tell you that you can't have two resources with the same name. So that's a limitation (there's a small sketch of this constraint at the end of this section).

The last thing I want to mention when talking about OS Migrate in general is migration versus recreation. If you've already decided an in-place upgrade is not for you and you're going for migration, it still sometimes makes sense to consider just recreating resources from scratch. That's the case, for example, when you have stateless VMs: CI clouds, scientific computation clouds, graphics render farms. Those are typically stateless workers that get a job from somewhere and post a result somewhere after computing it, but they themselves hold no important internal state. These are just not worth migrating, especially if you have automation for creating them, which workloads that scale out and down typically do. You can simply recreate them in the new cloud. A second case: if someone is very diligent with their automation, maybe they haven't been creating their networks and subnets and so on manually via CLI or UI, but have an Ansible playbook that creates them. If you already have automation that recreates something easily, there may not be much point in migrating it; you can just recreate it.

OS Migrate is good for these mixed situations, where you have automation for some things and not for others, because again, it's not a push-button solution. It's a toolbox composed of many playbooks, and you can pick exactly what you need from it.
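Coming back to the name-uniqueness caveat above, here is a hypothetical serialized file to make it concrete: the two same-named servers below would export without complaint, but import-time validation would reject the file. The field layout mirrors the export format shown later in the demo; the names and type strings are illustrative.

```yaml
os_migrate_version: 0.17.0   # illustrative version
resources:
  - type: openstack.compute.Server
    params:
      name: worker           # first VM named "worker"
  - type: openstack.compute.Server
    params:
      name: worker           # duplicate name within the same resource type:
                             # import validation fails here
  - type: openstack.network.Network
    params:
      name: worker           # fine: same name, but a different resource type
```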
So let's look at pre-workload migration first. That's the really simple, low-hanging-fruit migration, because nothing in the source cloud needs to be stopped for it to work.

We're talking about resources like networks, subnets, routers, security groups, Glance images, and Nova key pairs if you're migrating as a tenant. When you want to copy these resources to a destination cloud, you basically query the API for how they look in the source cloud, save the parameters to a YAML file, and then recreate them in the other cloud. Nothing needs to be stopped or altered in the source cloud, so this is usually a very easy migration.

As a tenant, you can migrate the resources I just listed. As an admin, there are some additional ones: users, projects, and Nova flavors can only be migrated as admin. We also have quota migration, which is work in progress right now; Phil is working on that. And key pairs are listed twice, under tenant and admin, because they're a little special. In OpenStack, most things are owned by projects, with users operating on those projects, but key pairs are actually owned by users. If you don't have administrative privileges to a cloud and want to migrate key pairs, each user has to migrate their own key pairs; without admin privileges, that's your only choice. But if you do have administrative privileges, we support bulk migration of key pairs on behalf of other users. So if you were exporting user accounts from one cloud and importing them into another, you can do the same with their key pairs using OS Migrate.

So Jerry, quick question. As an admin, I'd probably want to do my migration early, just to make sure everything's up and running and available. Right before the tenants are ready to do their migration, or before the admin migrates those tenants for them, could they run the pre-workload migration again to catch anybody who might have been added, or changes that were made in the meantime?

Yeah, you should be able to, thanks to the idempotency and the name-based references. OS Migrate would see that a user or project of a given name already exists in the destination, skip importing it, and only the new ones would get imported. And as an admin, you can still migrate the resources that were listed as tenant resources. Glance images, for example, can be private or public: you'd migrate private images as a tenant but public images as an admin, using the same playbook, basically.

And this is a diagram of how things work. Again, it's very simple. On the left side there's the data path: it goes from the source cloud onto the migrator host, which runs Ansible. There you have the serialized resources in the form of YAML files, which you can optionally edit, and then they get imported into the destination cloud. For pre-workload migration, that's all there is to it.

In terms of workflow, it's all Ansible playbooks. For example, you'd first run a playbook that exports networks, then run a playbook that exports subnets, then look at those files and edit them if you want. You can validate them explicitly, but if you don't, the validation runs on import anyway. Then you run a playbook to import networks and another to import subnets, and that way you've migrated your networks and subnets from one cloud to another.
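Concretely, that loop is a handful of `ansible-playbook` invocations, something like `ansible-playbook -i inventory.yml -e @os-migrate-vars.yml os_migrate.os_migrate.export_networks` followed later by the matching `import_networks` playbook (the export/import naming follows what the demo shows; check the collection docs for the exact playbook list and whether your Ansible version supports invoking collection playbooks by fully qualified name or by path). The inventory can be as small as the local machine acting as the migrator host; a minimal sketch:

```yaml
# inventory.yml (minimal sketch): run the whole migration from localhost.
migrator:
  hosts:
    localhost:
      ansible_connection: local
```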
And then, of course, there are more resources, as listed previously.

Quick question: when you're migrating your networks, if you needed to change IPs, could you just update the YAML files?

We try to converge where possible, but there are some places where that's not possible. I'm not sure offhand whether networks and subnets with their IPs are such a case; OpenStack itself might block you from editing that. Generally, we try to support updates where possible. We do have update tests in our functional test suite: we export a resource, import it, edit the file, import it again, and check that a parameter of the resource got changed in the destination rather than a new resource being created. So for some resources and some of their parameters this works, but there are cases where OpenStack will prevent you from editing something and tell you that to change it, you have to delete the resource and create a new one. So it works sometimes; I'm not sure whether it would work with, say, subnet IP ranges.

OK, that makes sense.

Cool. So, workload migration. When we talk about workload migration, we mean migrating VMs, the Cinder volumes attached to those VMs, and the creation of floating IPs for those VMs. This means the source workload is affected: it must be stopped while we're copying the data. Internally within OS Migrate it's a multi-stage process; my point is that it's much more complex internally than the pre-workload migration.

Storage-wise, it's a cold migration: the whole disk is copied from a stopped state of the VM. We do filesystem sparsification where it's supported, that is, for filesystems the sparsification tooling can recognize. What this means in practice: say you have a 300-gigabyte VM you want to migrate, and it only has about 50 gigabytes of used disk. When you think about how long the migration will take, the most important factor is how much used space there is. The 50 used gigabytes define how long the migration takes; the 250 gigabytes of empty space are copied very fast, essentially skipped. So sparsification is nice to have when migrating.

And we support two modes with regard to the OpenStack Nova boot disk. When the migration parameter boot_disk_copy is set to false, the destination instance is created from an image and only the attached Cinder volumes are copied. This is the mode to choose when you know the boot disk doesn't contain any important data you want to preserve: completely stateless workloads, or semi-stateless workloads that keep their state only on Cinder volumes. And then there's boot_disk_copy set to true, where, aside from copying the attached Cinder volumes, we also take a snapshot of the boot disk, copy the boot disk over to the new cloud, and create the destination instance as boot-from-volume. That way, whatever was written onto the boot disk of the source instance gets preserved. This is something that can't be auto-detected; it's the operator or user of OS Migrate who needs to know whether the instances have something valuable on their boot disks or not.
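In the exported workload YAML (shown in full later in the demo), this choice appears as a per-instance migration parameter. A hedged sketch of the two modes, with behavior summarized as described above:

```yaml
# Excerpt of one exported workload entry (illustrative field layout):
_migration_params:
  # false -> destination instance is re-created from its Glance image;
  #          only the attached Cinder volumes are copied over
  # true  -> boot disk is snapshotted, copied, and the destination
  #          instance is created as boot-from-volume
  boot_disk_copy: true
```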
So this is something that has to be set by the user. We default to boot_disk_copy false for instances that were created from a Glance image, and to true for instances that already exist in the source cloud as boot-from-volume instances, because for those, copying the boot disk is the only way we can migrate them.

Networking-wise, we preserve the fixed IPs, but we do not preserve MAC addresses on ports. Optional MAC preservation is on the roadmap, but the reason we don't do it by default is that we basically can't do it via the Nova API; we'd have to pre-create the ports with the Neutron API. That works nicely when creating the instances, but when such instances with pre-created ports get deleted, their ports don't get deleted with them, and that's very confusing behavior for users. For example, when you then want to clean up the subnet, you get errors that there are leftover ports on the subnet and it can't be deleted. So we default to preserving fixed IPs and not preserving MACs. We want to implement the other mode as well, but it's going to come with this caveat.

With regard to floating IPs, we have a few options. One is auto-creating them, where the IP address itself is selected by the destination cloud. Usually this is the only thing that makes sense, because generally the source and destination clouds operate on different public IP ranges. The other option is using pre-existing IPs: when you have a floating IP already created in the destination project and not yet assigned to any server, you can edit the YAML of the exported workload and write in the IP you want to use, and the import will use that IP. The default behavior is a fallback: we first check whether the IP address specified in the YAML file already exists in the destination cloud and can be used, in which case we use it; if not, we auto-create a floating IP with the address selected by the cloud. You can also edit the parameter to force the use of pre-existing IPs and fail if that's not possible, or to never attempt pre-existing IPs and always auto-create new ones. And the last option you can choose is to skip creating floating IPs altogether; if you have some custom solution for them that comes after the instance has been created by OS Migrate, this might come in useful.
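As a sketch, the four behaviors read like an enum on the exported workload. The value names below are my illustration of the options just described, not necessarily the collection's exact spellings; the demo later shows the parameter itself:

```yaml
_migration_params:
  # auto         -> try the floating IP recorded in the YAML; if it isn't
  #                 available in the destination, auto-create a new one with
  #                 a cloud-selected address (the default fallback behavior)
  # use_existing -> require the pre-existing floating IP, fail otherwise
  # create       -> always auto-create, never reuse a pre-existing IP
  # skip         -> create no floating IPs; handle them externally
  floating_ip_mode: auto
```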
And the same diagram for workload migration is a little more complex. From the data flow point of view, on the left side, we only serialize metadata into the YAMLs on the migrator host, not the binary data. You can edit the metadata any way you like, for example those floating IPs, or even the fixed IPs. Then, when you're happy with it, you trigger the migration, and the data flows directly from the source to the destination cloud via hosts we call conversion hosts, which deal with the data copying. They basically provide a direct path between the clouds for the data. The data is copied this way through the conversion hosts, and once the copy is done, the VM is created in the destination cloud.

From the workflow perspective, on the right side, before you start exporting instances you need to do one more thing, and that is to deploy the conversion hosts into both the source and the destination cloud. So first you run the playbook that deploys the conversion hosts, then the playbook that exports the workload metadata from the source. You can edit that if you want, validate it, then run the import, and after everything is done importing, you run a playbook again to remove the conversion hosts from the source and the destination.

The internal procedure for a workload migration is: we stop the instance; OS Migrate either stops it itself or, optionally, expects it to already be stopped by the user. If we're using boot_disk_copy mode, we snapshot the instance and convert the snapshot to a Cinder volume. We attach all the volumes to the source conversion host, the snapshot volume plus any extra volumes that were attached to the source instance. We create matching-size volumes in the destination and attach them to the destination conversion host. Then we do the filesystem sparsification and start copying the data from the source volumes to the destination volumes via those conversion hosts. Once that's finished, we create the VM and either create floating IPs or attach existing ones. And that's it.

And I have a demo video. Can you see a full screen of a terminal?

Yes.

Cool. So here's how you install OS Migrate, using the `ansible-galaxy collection install` command. We could request a particular version by appending a colon and a version number to the command; if we don't, it installs the latest, or at least what was the latest when I recorded the demo. It pulls in dependencies; most importantly, we depend on the official openstack.cloud Ansible collection, which is developed as part of the OpenStack project.

And here are the parameters we feed to OS Migrate. At the top, we have the source and destination authentication parameters. In the second block, there's the data directory, which is where the YAMLs with the resource exports are going to be stored. Then we see a bunch of filter variables; this is the mechanism that lets you select a subset of resources, not everything owned by the tenant you're authenticated as. In this case, we export all resources whose names start with the string osm_. And the last parameter in that block tells OS Migrate that it's allowed to stop the VMs it's asked to migrate. The final block holds parameters for the conversion hosts. We use CentOS; we attach the conversion hosts to the external network named public; and we need to provide DNS servers as well. I've redacted all the even slightly sensitive data from the files, so the DNS servers aren't shown. We also set the flavor we want to use for the conversion hosts and the MTU for their networks. This was recorded in a virtualized environment, so the MTUs are a little lower than you'd normally have.
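Roughly, the variables file being described might look like this. This is an illustrative sketch patterned on the narration; the exact variable names may differ from the collection's, so verify against the docs:

```yaml
# Continuation of os-migrate-vars.yml from earlier (illustrative names).

# Filters: only resources whose names match are exported.
os_migrate_networks_filter:
  - regex: '^osm_'
os_migrate_subnets_filter:
  - regex: '^osm_'
os_migrate_workloads_filter:
  - regex: '^osm_'

# Allow OS Migrate to stop the VMs it is migrating.
os_migrate_workload_stop_before_migration: true

# Conversion host parameters.
os_migrate_conversion_image_name: centos-8
os_migrate_conversion_external_network_name: public
os_migrate_conversion_flavor_name: m1.large
os_migrate_conversion_net_mtu: 1400           # lowered: virtualized lab
os_migrate_conversion_subnet_dns_nameservers:
  - 192.0.2.53                                # redacted in the demo
```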
We also export a utility environment variable pointing to the path of the Ansible collection, which will be useful when formulating the ansible-playbook commands. And here we list that we have an osm_net in the source project and don't have it in the destination project. This demo is recorded within a single cloud, so you don't even have to migrate between clouds; you can also migrate between projects in the same cloud if that's useful. I only had one cloud at my disposal, so I migrate from project to project.

At the top here, we run the ansible-playbook command. We give it an inventory file, which says to use localhost as the migrator machine where the YAML files are going to be stored; we provide the parameter file we saw at the very beginning of the demo; and we give it the playbook that exports networks. It creates a file, networks.yaml, in the data directory.

And this is what our data format looks like. First, there's the version of OS Migrate that was used to create the file. We currently don't support compatibility of data files between versions; in most cases it would work, but it's just not something we guarantee, so when importing, we check that the data files were created with the same version of OS Migrate. Then there's the resources section; there's just one resource here, but there could be many. The _info section holds parameters that cannot be carried over to the destination cloud. They can be useful for debugging, for example if you want to know the ID of the resource you exported, but they're not fed as parameters into the creation of the network in the destination cloud. The _migration_params section is something we'll look at later. And params is the important part: these are all the parameters that do get carried over from the source cloud to the destination cloud, things like the description, the MTU, which you can see defined here, and the name of the network. There will be more complex resources later, but networks are a very easy resource, so we just export them like this and import them. And then we see the osm_net has been imported into the destination.

Now we can see there are no subnets in the destination, and in the source there is one subnet, so we export again. It scans all the subnets it can find, applies the filter with the regular expression, and creates another file, subnets.yaml, and here it is. Again, the _info section can't be carried over, but here are the important parameters: it carries over things like the allocation pools, the CIDR of the subnet, all of these things, even custom host routes. And the interesting thing here is the referencing mechanism I talked about, where we use names to reference things. This says: attach this subnet we're creating, osm_subnet, to a network called osm_net, in the project and domain we're authenticating as in the destination. It's basically saying, use the osm_net from the same project where you're creating osm_subnet. So when we run the import playbook now, it creates the subnet and attaches it to that osm_net in the destination.
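Putting the narration together, the exported files have roughly this shape. This is a reconstruction for illustration, not a verbatim capture of the demo's files; the %auth% placeholder stands for the project and domain the import authenticates as:

```yaml
# networks.yaml / subnets.yaml, reconstructed shape
os_migrate_version: 0.17.0        # illustrative; checked to match on import
resources:
  - type: openstack.network.Network
    _info:
      id: 64d7a1b2-...            # source-side ID, for debugging only
    _migration_params: {}
    params:
      name: osm_net
      description: ''
      mtu: 1400
  - type: openstack.network.Subnet
    _info:
      id: 9c2f3e44-...
    _migration_params: {}
    params:
      name: osm_subnet
      cidr: 192.168.20.0/24
      allocation_pools:
        - start: 192.168.20.10
          end: 192.168.20.100
      network_ref:                # name-based reference, resolved on import
        name: osm_net
        project_name: '%auth%'
        domain_name: '%auth%'
```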
Routers go through the same export/import procedure, but they work a little differently in that they're split into two files: routers are separate from router interfaces, just because a router is a relatively complex resource and you may want to separately control how you create the router and which ports you attach to it. So here there are routers.yaml and router_interfaces.yaml. The routers.yaml just creates a router named osm_router on a network named public. And we can see another thing here: this reference doesn't have the %auth% values. It says to attach the router to a network of name public, which does not have to be in the project we're authenticating as, which means this kind of reference allows referencing public resources, like public images and public networks. In the interfaces file, we do have the %auth% reference, meaning we look for osm_router only in the project we're importing into; if there were something like a public router of that name, we wouldn't use it. So this is the device reference, meaning we're creating interfaces on this router, and we're adding an interface with this private IP address, on this subnet and this network. Again, all the name-based referencing at work.

So we run the two playbooks, one importing routers and another importing those router interfaces, and then we list the routers on either side. It's best printed as YAML, which is easier to read, and we can compare how similar or different they are. Here we first see the router exists, and now we can print it. You can see they have different external addresses. That's sort of like a floating IP for a router, I'd say, a publicly reachable address, and since we're in the same cloud and these are two different routers, one in the source project and one in the destination project, they do have to have different IPs; otherwise they'd collide. That address is also selected by the destination cloud. But when we look at the private networking part, we can see it has exactly the same private IP in source and destination. So if we attach any VMs to the subnet that this router is the default gateway for, they can depend on that IP being the same in the source and in the destination.

And a similar thing for security groups: they're also split into two files, security groups and security group rules. The reasoning there is that security groups by themselves are pretty simple, but the rules can actually reference other security groups, and there can even be circular references. If we imported the rules together with the security groups, there could be unsatisfied references at creation time. The only way to solve this is to create the empty security groups first and then create the rules for them, which may or may not reference other security groups. That's why they're exported and imported in two stages. It's still a fairly simple process, I'd say. If people test this out in a testing environment, I'd expect them to write some batch scripts around the playbook executions just to make migrating even simpler, so they wouldn't have to run that many commands. And we can see the security group was migrated correctly, and if we list the rules, they're there as well.
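A hypothetical pair of exports showing why the split matters: two groups whose rules reference each other can only be satisfied if the empty groups are created first. File shapes follow the earlier format; the group names, ports, and field names are illustrative:

```yaml
# security_groups.yaml: groups are created first, empty
resources:
  - type: openstack.network.SecurityGroup
    params:
      name: osm_web
  - type: openstack.network.SecurityGroup
    params:
      name: osm_db
---
# security_group_rules.yaml: rules may reference other groups by name, even
# circularly; all groups already exist by the time these are imported
resources:
  - type: openstack.network.SecurityGroupRule
    params:
      direction: ingress
      protocol: tcp
      port_range_min: 5432
      port_range_max: 5432
      security_group_ref: {name: osm_db, project_name: '%auth%'}
      remote_group_ref: {name: osm_web, project_name: '%auth%'}
  - type: openstack.network.SecurityGroupRule
    params:
      direction: ingress
      protocol: tcp
      port_range_min: 9187
      port_range_max: 9187
      security_group_ref: {name: osm_web, project_name: '%auth%'}
      remote_group_ref: {name: osm_db, project_name: '%auth%'}   # circular ref is fine
```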
And now the last part of the demo: the workload migration. There are no servers in the destination, and in the source, on the left side, there's one server. That server has a one-gigabyte volume attached, and it was booted from an image, which is important for what we'll see later. As the first step, we run the playbook that deploys the conversion hosts, the ones that will facilitate the data copying.

At this point I should say this was recorded with asciinema, capped at a maximum of three seconds of waiting between things happening on the screen. In effect, this means the recording of the conversion host deployment, and of the migration later, is sped up; some of the steps take longer in reality. For example, we'll see it updating packages and installing new packages onto the conversion hosts, and those steps do take a while; here they're capped at three seconds.

So it created instances in both the source and destination projects, and now it's linking the conversion hosts together, basically making sure the destination conversion host can SSH into the source conversion host. We copy the data via NBD over SSH, that is, network block devices, secured by being communicated within an SSH tunnel. Because we're using CentOS, it installs EPEL on both conversion hosts, updates the packages, and installs a few packages, like the virt-sparsify tool and libvirt, which virt-sparsify requires. So here the conversion hosts are deployed; we can see there's a new os-migrate-conv-src server in the source and an os-migrate-conv-dst in the destination.

And now we can actually export the workloads. We could have exported the workloads even without the conversion hosts being present; we just wouldn't be able to migrate them. So this again created a new file, workloads.yaml, and this one is longer; it contains more data than the pre-workload migration resources. This is where we first see the _migration_params section in use. We see the boot_disk_copy parameter, which controls whether we snapshot the instance or boot it from a fresh image in the destination. For the snapshot case, we have, for example, the availability zone parameter, to control which Cinder availability zone the boot-from-volume instance will go into. And we have the floating IP mode parameter, as I mentioned, switching between skipping floating IPs, using pre-existing ones, creating new ones, or the auto mode, which first tries existing ones and then falls back to creating fresh IPs. Then there's the floating IP reference section, where you could write a pre-existing floating IP from your destination project; it gets used if it already exists there and isn't attached to an instance yet, and if that's not possible, a new one is created. This section is actually important for another reason, too: in case the VM had multiple ports, this information tells us which port each floating IP should be created on. So we can migrate VMs with multiple ports and multiple floating IPs, and in the destination, the floating IPs get created on the correct, expected ports. There's also a reference to an image, although it's not going to be used here, because we'll be copying the instance via a snapshot and a boot-from-volume process. And here are the ports: there's one port defined, with an IP matching what we saw in the floating IP section, so the floating IP will be created on this port, and this port will be attached to osm_subnet, which is within osm_net. Again, this all links to the resources we migrated in the previous parts of the demo, and the security group is referenced here as well. So that's the file.
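Reconstructed for illustration, the interesting parts of that workloads.yaml might look like this; the values and some field names are guesses based on the narration rather than a verbatim capture:

```yaml
resources:
  - type: openstack.compute.Server
    _info:
      id: 3a7c...                        # source-side ID, debugging only
    _migration_params:
      boot_disk_copy: false              # default for image-booted instances
      data_copy_availability_zone: nova  # Cinder AZ for the copied volumes
      floating_ip_mode: auto             # skip | use_existing | create | auto
    params:
      name: osm_server
      flavor_ref: {name: m1.small}
      image_ref: {name: centos-8}        # unused once boot_disk_copy is true
      security_group_refs:
        - {name: osm_security_group, project_name: '%auth%'}
      floating_ips:
        - floating_ip_address: 203.0.113.17   # pre-existing IP, if usable
          fixed_ip_address: 192.168.20.15     # pins the IP to this port
      ports:
        - network_ref: {name: osm_net, project_name: '%auth%'}
          fixed_ips_refs:
            - ip_address: 192.168.20.15
              subnet_ref: {name: osm_subnet, project_name: '%auth%'}
```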
Before we trigger the import: here we see boot_disk_copy is false by default, because the instance was created from an image. But let's say we actually did write something important onto the boot disk. So we edit the workloads file, set boot_disk_copy to true, and then when we run the workload migration, it will do the snapshot and create a boot-from-volume instance.

So, the import playbook. It's discovering the conversion hosts, and now it's verifying that it can connect to each of them. It's doing preliminary tasks like setting up the conversion hosts for the migration, reserving ports, and so on. Now it's attaching the volumes on the source side and exposing them so the destination conversion host can copy them. Now it's in the transfer tasks, copying them; again, this is sped up relative to how it runs in reality, due to the asciinema recording. Once the volumes are copied, it creates the destination instance using all the data from the YAML file, and after that's done, it cleans up on the source conversion host.

So now we can see the server has been created. It references the key pair and the security group we imported previously, and here's the interesting thing, the volume aspect of it. We see this five-gigabyte boot volume; this didn't exist in the source cloud, because the source instance was booted from a Glance image, but we took the snapshot, so the destination instance is booted from this volume. It also has the one-gigabyte extra attached volume, created and copied from the source. And in the source, we see there's only the one-gigabyte volume, because the source instance was created from an image, not from a volume.

And the last thing after migrating like this: we just delete the conversion hosts. The playbook looks them up and removes them via the OpenStack API again, and we see the conversion hosts disappear from the final listings. It's also deleting the supporting resources for the conversion hosts; they each have their own security group, key pair, router, subnet, and network, so that they can live somewhere within the cloud. And at the end, we see that the osm_server in the source cloud is still there but stopped, the conversion hosts are gone on either side, and the osm_server in the destination cloud is running. So that's where the instance is running now. And that's all for the demo, and that's it for the presentation as well.

Thank you so much, Jerry. I can actually see where migrating within the same cloud could be helpful for certain projects, so it's really good to know that it can do that. And I was thinking about the 10 to 16 migration: it actually went from a Packstack-installed system to a TripleO-based installed system, so you don't necessarily have to have your clouds installed by the same deployment project, which is also really good to know.

Yes, yes, exactly.

Cool. Well, thank you so much, Jerry, this was really informative.

Well, thank you. Thank you for inviting me.

You're welcome. And as I mentioned at the start of the show, this is our very last Cloud Tech Tuesday.
Thank you all who have been attending the shows. I know we've been a little sporadic, and we do apologize for that. So thank you, everyone, and thank you, Jerry. Take care.

Thank you.