Hello everyone, thanks for joining. My name is Alessandro Pilotti, and we're going to talk about migrating workloads from VMware to OpenStack. We're going to explain a few concepts about migrations and the best strategies, let's say, to move content around between different clouds, and we will introduce a project called Coriolis that we've been working on, which is aimed exactly at that. How many of you are running VMware? Good. How many of you are running OpenStack? Okay, this is obviously a rhetorical question. So let's set the context first. In our domain we are constantly moving workloads from one technological generation, let's say, to the next. A few years ago we were moving from physical servers to virtual machines; that was during the 2000s. The next step was to move from virtual machines to something that allowed us to define our resources in a software-defined way, so we moved from virtual machines to infrastructure as a service, which is where we are today. From there we moved to containers, and if you look around at the marketing that lots of companies are doing today, including for example Microsoft, there is of course also a new option, the so-called serverless, in which you just develop applications without thinking about servers at all. The part we are particularly interested in at the moment, and where most of the companies out there are, is that move from virtual machines to infrastructure as a service. So we move from traditional virtualization, like VMware or System Center, to software-defined everything, where of course OpenStack is the undisputed king; or it could be a public cloud like Azure or AWS or Google Cloud or Oracle and so on. The moment you need to move from one generation to the other there is always a cost associated. But if you do things right, you are
also improving the TCO, of course. There is no particular reason to move from one generation to the other if you don't improve your investments, your total cost of ownership. So let's talk about why we do it. As I was mentioning, in general what we're trying to achieve is to improve our total cost of ownership. We might want to do it because we have a new on-premises cloud infrastructure: we have some old servers, we have a bunch of new servers, and we want to move our workloads from one to the other. We might have a public cloud, so we just want to move our stuff from on-premises to the public cloud, which actually happens pretty often. Or we might want to move from the public cloud to on-prem; there are also a ton of reasons to do that. Or we might want to do a redeployment on-prem. For example, you might have an old OpenStack cloud, "old" quote-unquote of course, like OpenStack Kilo, and you might want to move to Newton. There are of course always easy ways, "easy" quote-unquote again, to move from one version of OpenStack to the next one, say Newton to Ocata. But once you have a gap between OpenStack versions it becomes more and more complicated. Lots of our customers, I would say even the majority, deploy OpenStack and simply remain stuck on that version until one day they decide: well, it's too old, I need to move to the new version because I need the new features or whatever else. And they come to us and say: can you help us move from Icehouse, from Kilo, whatever, to Newton, to Ocata, whatever the next one in line is? There is really no direct way to do an in-place migration across a gap like that, so one of the easy options is to have two parallel deployments, one with the old version and one with the new one, and simply move the workloads from one to the other.
Let's talk a little bit about the options. I took this diagram from Stephen Orban from Amazon, to give credit, because I think it's very well done. The moment you decide to move your workloads from one cloud to another, there is not just one option; there are actually a lot of them. The first one is re-architecting. Usually, in the OpenStack context, when people ask "hey, how can I move my application to the cloud?", the typical answer is to rewrite it. Which is a very simple, very clear answer, because people say: well, if you move to the cloud you cannot rely on the host anymore for high availability. Lots of people using VMware, or System Center with Hyper-V and so on, typically rely on vMotion and similar technologies for the high availability of their workloads: if one of the hosts goes down, your workload keeps running somewhere else. That doesn't mean this is not available in OpenStack; the Hyper-V driver does it, for example, and I have to say, as a disclaimer, that we wrote the Hyper-V driver. But generally speaking, public clouds, and cloud design in general, are such that you don't rely on the underlying host; the entire high availability is concentrated in the application layer, in what you actually write. So for example, a web application might have a lot of different instances that are simply load-balanced: if one of them dies, the others will just carry on with the work. If the underlying KVM node currently hosting one of them dies, no problem.
The others will be hosted somewhere else and keep working. As long as you have your affinity rules properly set, everything will just work. Beautiful: rewriting your application is definitely the best thing to do. But if you spent the previous 20 years writing your line-of-business applications, it probably won't happen overnight; it might take you, maybe not another 20 years, but five. In our experience, at least with our customers, there are a ton of applications that people don't even know how they're written; the people who wrote them left. Nowadays we have all these very nice continuous integration and testing suites, we have all these agile methodologies and everything, but five years ago not so many people necessarily had them. So you end up with a lot of spaghetti code, which is difficult to maintain, difficult to port, and definitely difficult to rewrite, because you first have to understand how that logic worked before moving on. So most probably you will write your new applications with microservices, PaaS layers and whatever else; the old ones, not so much. I'm not saying that rewriting is a bad idea. Actually, it's the best idea you can have, but it's definitely not the most feasible one in most cases. It's expensive, it's time-consuming, and don't forget one thing: if you rewrite your application, it doesn't impact only the developers, it impacts the users as well. I don't know about you, but in our experience, sometimes you just move one button from here to there and you have users complaining. So think about rewriting absolutely everything and having to retrain everybody. Definitely not something particularly easy. Repurchasing is the second option. If you have something that you wrote in-house, and somebody else wrote a SaaS service that does exactly what your software was doing.
Well, that's another perfectly good idea. You don't have to rewrite it; you just buy a different software, possibly as a service. A perfect idea. Unfortunately, this works only in cases where you have a pretty generic type of application, because nobody is going to build a SaaS service only for your specific needs; otherwise, why would they do it? But as long as it's something very common, like a billing application, an accounting application, you name it, all those things definitely have lots of SaaS options, and it's very well worth doing. The problem, if I move to the cons side, is that whenever you need a customization it's very difficult to fit it in, or very expensive, because you have to ask the makers of those applications to do it for you. Migrating data can be expensive and time-consuming, because you have gazillions of gigabytes of data that have to be moved to a completely different platform. And again, there is a learning curve for your users. The next option is one of the obvious ones: retiring. You cannot imagine how many times it happens that you go to a customer and find racks and racks full of servers with stuff that has been running since ages ago, nobody has any idea what's inside, and everybody is simply too scared to decommission it because they simply lost track of what's happening there. So the best thing you could do is probably an audit: discover what you actually need and what you don't, take a decision, and simply shut down what's not needed.
Okay. The biggest advantage: you stop paying for keeping that stuff running, because whether it's on-premises or not, there's always a cost: electricity, maintenance and everything else. And if you're moving into a public cloud, well, that would be a significant cost that you are simply saving. Another obvious one is retaining. Not everything is meant to be migrated; there are things that are meant to stay where they are. For example, applications which are not self-contained but have a tight integration with the underlying cloud or virtual infrastructure. If your application is talking directly to the VMware APIs, well, you cannot just take that application and move it to OpenStack; you would also have to rewrite the entire piece of code that is talking to VMware. So that one you definitely have to retain, or toss it away and rewrite it. The next one is particularly interesting, the so-called replatforming. Replatforming is a kind of in-between thing between the re-architecting we were talking about before and rehosting, which is the next topic. Replatforming means that you take the applications as they are, but instead of running them on the current platform, you simply wrap them in a different context. This wrapping, quote-unquote, can be clean when the application allows it, or very, very dirty when it doesn't. An example could be: you take an ASP.NET web application which is self-contained inside its IIS web server, or a PHP application which is nicely contained as an Apache website; you take it, you containerize it, and you move it. That's a pretty simple case of replatforming. Or you wrap it into a PaaS layer: alternatives could be Cloud Foundry, OpenShift, Azure Service Fabric, whatever else.
Okay, those are all examples of replatforming: take what you have, try to understand how it works, divide it into services somehow, and put it inside a different platform. It's usually not so easy, because you have a web front end, some middleware layers, some databases, and all those things have to be moved and orchestrated in a way in which they can talk to each other. We are big fans of Kubernetes for this type of application, because you containerize and then you have Kubernetes handling all the orchestration for you, right? But again, it takes work, you need to do it, and it doesn't necessarily apply in all cases; you might actually have a lot of hacking involved just to make it work. Okay, let's talk about the next one, the one which will lead to the rest of the conversation today: rehosting. Whenever you talk to a cloud purist about rehosting, they typically make a face, because it's something that goes against all common sense in public clouds: you take what you have, the way it is today, and you move it to the new cloud exactly the way it is, including all the defects that it has. But what's the point then? You have a big advantage, and the advantage is that you don't have to care what's inside your virtual machines. You take them, you lift them, you shift them, and you make them run in the new context. People might tell you: well, if you do that, you don't take any advantage of the cloud. That's not true.
You take a lot of advantage, just by the fact that you save a lot of money by no longer paying for a cloud infrastructure that you don't need anymore. Because the problem is that otherwise you will end up with two clouds: the old one, VMware for example (I'm not pointing fingers at VMware, it's just an example, it could be System Center or whatever else), and the new one, for example OpenStack. Whereas if you do the full rehosting thing, you can move everything, get rid of the old cloud, and save a ton of money, maybe reusing those servers as OpenStack compute nodes or whatever else. And with the money that you saved, improving your TCO, you can invest in redeveloping your applications. The advantage of rehosting is that it can be done in an almost completely blind way, meaning you take the servers, you move them, and in the meantime you think about what to do next, what to rewrite and everything. That's actually the way we typically operate. Now, as I was mentioning, you won't take full advantage of the cloud model, and the target cloud might not have the host-level high availability that you're looking for, so you have to think about that. It's what we were discussing before: when you have, for example, a database that relies on the underlying host for high availability, you might not have that on the target cloud, so you have to be careful there. Rehosting is something that you can do manually. You extract the virtual machines somehow from the source cloud (think about getting the VMDKs from VMware), convert them to QCOW2, and import them somehow into Glance, talking about OpenStack.
And then you do a ton of manual steps to get it to work in the new environment. Some examples of those steps: the virtual disk format, which we already talked about; the synthetic kernel drivers, because when you switch from one hypervisor to another there are a lot of differences. It's two different types of virtual hardware, so on VMware you have the VMware tools, and on the other side you have virtio drivers, or LIS, or whatever else. The initrds typically contain whatever drivers you need to boot your system, and since you don't plan to have all the possible drivers in there, you might well not have the ones for the target, so you have to run dracut or whatever else in order to regenerate those initrd images. Also, SELinux by definition won't play very well if it suddenly discovers that you have a different disk under your feet; it can prevent you from booting, so you have to tell SELinux to allow that. The PCI IDs will change radically: you have a different machine. It's no different than if you ever tried to take a hard disk from one laptop, put it in another laptop and boot it; it's the same identical thing. The operating system will hopefully boot, but you will find a ton of differences, for example different network configurations. What used to be called eth0 is no longer eth0; it's going to be eth1, because the system will find a new adapter, unless you go into the udev net rules and assign that specific name to a specific MAC address, so that when the system boots it knows that that specific PCI device has that name. Provisioning agents: if I'm moving from VMware to OpenStack, I will need to add cloud-init if I want to take full advantage of my cloud and the metadata API, right?
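As a concrete illustration of the interface renaming issue just mentioned: on distributions that use persistent net rules, the NIC name can be pinned to the adapter's MAC address with a udev entry like the one below. The MAC address here is a made-up example; the real one would be the adapter's address on the target cloud.

```
# /etc/udev/rules.d/70-persistent-net.rules  (illustrative entry; MAC is hypothetical)
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="fa:16:3e:12:34:56", NAME="eth0"
```

After fixing the rules, the initrd can be regenerated with the distribution's usual tool, for example `dracut --force` on RHEL-family systems or `update-initramfs -u` on Debian and Ubuntu.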
If I'm going to Azure, I will need the WALinuxAgent. If I have Windows and I move it from VMware to OpenStack, I will need to add cloudbase-init, and so on. If you Google for something like "how do I move a VM from VMware to OpenStack?", you will find a lot of blog posts explaining all the steps. Now, if you have to do it for one or two machines, do it manually; there is no particular reason to go crazy with full automation. But if you're moving an entire infrastructure, and we're talking about hundreds or thousands or more virtual machines, you don't want to do it manually, for a bunch of reasons: first because it will take you forever, and second because everything manual is error-prone, right? And since we are talking about moving to a software-defined-everything infrastructure, well, why don't we also software-define the migration? One last thing about migrations: they are relatively easy when you go from the same platform to the same platform, from the same hypervisor to the same hypervisor. If I'm doing OpenStack plus KVM to OpenStack plus KVM, then I don't have too many steps to do. It's much more complex when I do inter-platform and inter-hypervisor migrations. Okay, so now that we've set the context, let's introduce Coriolis, which is the project that we wrote exactly for this reason. It does fully automated lift-and-shift migrations from and to any cloud or virtualization solution. It's scalable: you can do one migration or a thousand at the same time. It has a REST API, and it uses Keystone for identity management. So here is an idea of how the architecture works. It has been written to look and feel exactly like an OpenStack service; the main idea is that you can just register the endpoints in Keystone and you will have a new migration endpoint together with your compute endpoints, Cinder endpoints,
Swift endpoints, or whatever you might have there. As you can see, it has a REST API front end, so the user will get a token, connect, and then, based of course on whatever access the user might have, will be able to perform operations. Behind the API there is a conductor, and all these microservices talk to each other via AMQP, so we are a bit like any other OpenStack service. There is a scheduler, which is in charge of, well, scheduling operations. There is a database, typically MySQL or anything that can be digested by SQLAlchemy, which contains the configuration of your migrations and so on. And then there are the worker processes, which are particularly important in this context. The workers are the ones which actually connect to your infrastructures, so you might have one worker talking to VMware on one side and one worker talking to OpenStack on the other; it might also be the same worker. It's up to you how many workers you want to deploy in your infrastructure; as I mentioned before, this is meant to be scalable, so you might have hundreds of workers if you want. Also, the placement of the workers can be chosen strategically in order to optimize the traffic between them, so that if you have to migrate a hundred gigabytes from one infrastructure to the other, you definitely don't send it out to a public cloud and then back to your infrastructure. That's why I believe that a SaaS model for migration doesn't really work. It might be easy for vendors to set up, because that way it's easy to control, but it's very inefficient from any other perspective. The best thing you can do is to have full control of your workers. All these components are, as I was saying before, regular microservices written the OpenStack way, so they are written in Python, using Oslo and Keystone as I mentioned before, and
Barbican as part of their components. And they can run anywhere, on Windows or Linux; for example, we typically package them in an Ubuntu or CentOS VM and they just run. So they're also very easy to distribute: if you want to put a worker in your VMware infrastructure, you just take a virtual machine with the components running on top of it and you run it, so nothing particularly complicated to install. Or, even better, a container. Barbican. Barbican is the OpenStack project which is meant to handle secrets, and what secrets do we have here? We have the credentials to connect to the source and the target clouds, right? You don't want to have your VMware credentials floating around in clear text, with the risk that they end up in log files; you want to make sure that they will be safe and that only the workers will be able to access them when they need them. How do the workers access them? Well, you have a Keystone token which travels in the context across all those various layers. Since you created the secret, you are basically allowing Coriolis, with your token, to go and fetch the secret for you. Coriolis of course also uses Keystone trusts in the process, because a migration might run longer than your token can live: if a migration lasts more than one hour, you will end up with an expired token if you don't have a way to handle that. Barbican is of course optional, so if you want to pass the credentials in clear text, you can do it. Okay, next: the workers. Coriolis itself has no idea about VMware, OpenStack, AWS, Azure, or whatever else.
What Coriolis has is a fully decoupled interface, so that you can write your own so-called providers, which are basically Python plugins that implement a given set of interfaces, and those plugins know how to talk to a given cloud. So whenever we add a new one, we simply implement those plugins; we don't touch the core of Coriolis itself. So of course there is a provider for VMware and a provider for OpenStack in this case. We distinguish them into import and export providers, because in this case we are exporting from VMware and importing into OpenStack, but of course we could also do it the other way around. This way you can, for example, export from AWS and import into OpenStack, export from one OpenStack and import into another, and so on. What's next? Another important component that we will see pretty soon is the concept of OSMorphers, as we call them internally. What do they do? They inspect the content of your disks, determine what type of operating system is in there, and perform operations based on what they find. Meaning: if one discovers that there is a Windows, it will perform a given set of operations, knowing that it's going to OpenStack, for example; if it's a RHEL it will do some operations, an Ubuntu some others, a CentOS some others, and so on. Those morphers are also fully decoupled, so at any time you can add a new operating system, and of course the steps that you have to perform differ based on the operating system you are handling and the target platform. What's next? Supported clouds and virtualization solutions, sorry, OpenStack components.
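To make the plugin idea concrete, here is a minimal Python sketch of how a decoupled morpher registry could dispatch on the detected guest OS. The class names, methods, and returned step lists are invented for illustration; they are not Coriolis's actual interfaces.

```python
import abc


class OSMorpher(abc.ABC):
    """Adapts a guest OS found on the migrated disks to the target platform."""

    # each morpher declares which OS identifier it handles
    os_type = None

    @abc.abstractmethod
    def morph(self, os_root_dir):
        """Return the list of adaptation steps to perform on the mounted disks."""


class WindowsMorpher(OSMorpher):
    os_type = "windows"

    def morph(self, os_root_dir):
        # e.g. install the provisioning agent, inject virtio drivers...
        return ["install cloudbase-init", "inject virtio drivers"]


class UbuntuMorpher(OSMorpher):
    os_type = "ubuntu"

    def morph(self, os_root_dir):
        # e.g. install cloud-init, regenerate the initrd...
        return ["install cloud-init", "regenerate initrd"]


def get_morpher(detected_os):
    """Dispatch on the OS detected by inspecting the disk contents."""
    for cls in OSMorpher.__subclasses__():
        if cls.os_type == detected_os:
            return cls()
    raise LookupError("no morpher for %s" % detected_os)


print(get_morpher("ubuntu").morph("/mnt/guest"))
# → ['install cloud-init', 'regenerate initrd']
```

Adding support for a new guest OS then means adding one subclass; the dispatch loop and the rest of the pipeline stay untouched, which is the decoupling described above.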
We already discussed those. Here is a quick list of what we support today: OpenStack with KVM, Hyper-V and all the possible hypervisors; Azure; AWS; vSphere, of course; System Center; XenServer; oVirt; KVM; Oracle VM; and we plan to add GCE and Oracle Cloud soon. So basically, whenever somebody comes and asks for one, we do it, and of course it's open for others to do it as well. Now, this type of migration can be very error-prone. Why? Because there can be any possible type of transient issue while you're moving the data. This is not something that takes three seconds and it's done; it's something that might take even half an hour or more, depending on how many gigabytes of data you have to transfer. In that time, the connectivity between the source and the target might fail because somebody tripped on a cable, or the lights went off, or whatever else. So you need full resilience to make sure that things work. That's one thing. Another thing is that the tasks you're performing are many, and if you do them one after the other you waste a lot of time. So what you most probably want to do is parallelize as many as you can, and run sequentially only the ones that depend on each other. Coriolis is basically based on a task flow, meaning that every migration is divided into a lot of tasks: for example, connecting to the source cloud, creating volumes on the target cloud, extracting data, importing data, and so on.
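The scheduling idea described here, running independent tasks in parallel and sequencing only the dependent ones, can be sketched with a tiny topological batching helper. The task names echo the examples from the talk; the code itself is only an illustrative sketch, not Coriolis's scheduler.

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Each task maps to the set of tasks it depends on.
tasks = {
    "connect_source": set(),
    "connect_target": set(),
    "create_target_volumes": {"connect_target"},
    "extract_data": {"connect_source"},
    "import_data": {"create_target_volumes", "extract_data"},
}


def parallel_batches(deps):
    """Yield sets of tasks whose dependencies are all satisfied,
    so each set could be dispatched to workers concurrently."""
    ts = TopologicalSorter(deps)
    ts.prepare()
    while ts.is_active():
        ready = ts.get_ready()  # every task runnable right now
        yield set(ready)
        ts.done(*ready)         # mark the batch finished


for batch in parallel_batches(tasks):
    print(sorted(batch))
```

Run as-is, this prints three batches: the two connect tasks together, then volume creation alongside data extraction, then the final import, which is exactly the "parallelize what you can, sequence what depends" behavior.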
Okay, you will see it pretty soon. Every task has events which detail information about what's happening, and those events contain updates which are sent back to the conductor, and the conductor stores them in the database. So at any time you can go and fetch the status of your migration and display it in your user interface, whether it's a command line, the web UI, or whatever. The main goal here is that you start the migration, you go out, you get a coffee, you come back, and the migration is done; Coriolis is supposed to do everything. I usually put a joke here that the coffee has to be very, very long. I'm originally Italian, so that's a total blasphemy, because everything longer than an espresso doesn't work; so you might have something else to do. But anyway, you don't have to babysit your migration: you can do something else, come back, and the migration is done. Most important, you can also start a gazillion of those migrations at the same time and simply take a look at each one and see the status. There are some examples here in the slides that I'm not going through, because I'm going to do a full migration now. What's next? Supported guest operating systems today: Debian, Ubuntu, and basically every reasonably recent version of each operating system; SUSE, RHEL, CentOS, Oracle Linux, Fedora, openSUSE; Windows clients, all the supported ones; Windows Server, all the supported ones, including Nano Server. We also support, to some extent, XP and 2003; as you know, they are not supported by Microsoft, but sometimes customers still want to move them around. There is a REST API, which is fully public, so it's also possible to play with it with Postman, for example, if you want to develop around it. There is a command line interface that I'm going to show you pretty soon, and there is also a graphical user interface in the works.
This is a single-page application written in React, fully open source, and so on. Now, we didn't talk about the elephant in the room, which is downtime. If I want to migrate all those virtual workloads, I might not be able to do it properly if I need to shut down my source virtual machines, export the data, convert it, and start it on the target. So how do we handle the downtime? We introduced a disaster-recovery-as-a-service feature that we call replica. If the cloud allows it (for example VMware does, OpenStack does, and so on), data is backed up incrementally while your machines are fully running. How does that work? Coriolis will use the backup APIs that the platform offers, connect to them, take a snapshot, leave the machine running, and simply extract the data under the hood. The migration is performed as the last, final step. So you might decide, for consistency, to shut down your machine at that last step: once you already have all the data on the target, you start the one on the target, so you have just a minimal amount of downtime in between. Or you might just leave your machine up and running on the source and never finalize the migration, doing it only if there is a disaster that won't allow you to run the machine on the source anymore. That's why it's called disaster recovery as a service in this case. The good thing with replicas is that since the data is fully replicated on the target, you don't need the source machine anymore.
Okay, so it doesn't really matter what happens to it. Examples of backup technologies: Cinder backup, VMware Changed Block Tracking, Windows VSS, and so on. Another good thing about Changed Block Tracking and VSS is that they allow application consistency. They do what in VMware terms is called quiescing the file system in the guest snapshot, if the guest operating system allows it. What happens, basically, is that VMware talks to the guest operating system via the VMware tools, and the VMware tools instruct the operating system to stop any operation that would require writes to disk. For example, on Windows, VSS will talk to SQL Server, to Oracle, to applications which are able to do that, and those applications will stop writing data to the data files and write only to the logs of the database. This means that if I take a snapshot, copy it over to the target, and start my machine, my application will be in a fully consistent state; I won't get to a point where I have a transaction which has been half written to disk just because I did my migration in that specific millisecond. On Linux it works in a similar way, with filesystem freeze. Okay, I think it's time for a demo; I have five minutes to go. Okay, let's see. The first thing is "coriolis endpoint list"; let me make it a bit bigger. These are, for example, three endpoints that I have in my demo environment right now: one is OpenStack, one is VMware, and the other one is Oracle VM, from another demo before. All of them have an ID, so if we do a "coriolis endpoint show" and go take a look at it, you will see some information like a description, a name, an ID and everything, and you can see the connection info. I don't see any credentials in clear text; it's just pointing to a Barbican secret. So what I can do is, for example, go do a "barbican secret get", and as you can see here, I can see my actual credentials.
For the demo environment, of course. And the same I will see for the VMware one. For example, here you can see that it's just a JSON object which contains my Keystone version, username, password, the project name: the usual things that you would expect. I can do the same thing for the VMware one; here, as you can see, I have a different secret of course, and in this case you can see there is again another test environment with other credentials and everything. Now, what can we do with Coriolis? I will start directly with a replica, since we don't have the time to do a full migration, which would copy, let's say, an entire machine from one side to the other; we will do an incremental replica starting from an existing one. So, "coriolis replica list": here I have a bunch of them. For example, let's take this Ubuntu one here. "coriolis replica show" is showing me some details, telling me for example that I have an origin endpoint ID and a destination endpoint ID, and then it's showing me how many times I executed this replica, meaning how many times I replicated the data from source to target. Now let's go to our VMware side, and here is my Ubuntu VM. Obviously my session expired. Let's get the console and just write something into it, like "hello openstack"; I'm doing this just so that you can see that there has been a change. Okay, good. The machine is running, I'm not touching it. Now, this is the ID of my replica, and I'm going to do "coriolis replica execute" and pass the ID. Okay, it started. Now, in order to show you all the content on the same screen, I will have to reduce a little bit of space. I have an execution ID, which is here, and I'm going to do "coriolis replica execution show" with a watch in front. Okay, so what's happening here?
It's a bit smaller text, but I can tell you what's going on there. I have all the individual tasks, each one with a different status. For example, here I have one that is called "get instance info", fetching information from the VMware one; one which is called "deploy replica disks", creating volumes on the target OpenStack; another one which is called "deploy replica source resources", which creates some temporary virtual machines which are doing all the work; and "deploy replica target resources", which deploys them on the target. And here you can see a lot of status updates: for example, here it tells me that it creates a temporary keypair, a temporary port, a floating IP, spawns a VM, and then waits for connectivity on the SSH port there, okay? So we create also a temporary security group and everything.

Once we are done with that, we move to the next step — let's see if the resolution allows — which I believe is the most interesting one, which is here at the bottom. It creates a snapshot on the source and, using the CBT API, gets the latest change tracking ID and tells VMware: give me only the changes since the last time that I executed this thing. Okay, so now if you look, we have only around 3 megabytes of changes, okay, out of, I don't know, 5 gigs in total, okay? So that's really fast. What's happening here: basically we take that data, we compress it, we send it over an SSH channel to this temporary virtual machine running in OpenStack, with information about what offset and what disk needs to be written, okay? The temporary machine has all the volumes attached for that specific machine and simply writes that data in place. Okay, at the end of the process we will have a byte-by-byte identical image between the source and the target. Okay. When the replica is done...
It will be marked as completed. Okay, as you can see, we're already done. What's next? Well, you can repeat this process forever, and you will just have a backup copy on your target. Okay, so this is part of the disaster recovery: not only do you have a copy of your data, you have also all the information needed to recreate the machine, like how many CPUs are needed, you know, how much memory, and stuff like that.

So at some point you might want to migrate this replica. So you do coriolis migration deploy replica, and you pass in the ID of the replica, and this will start the migration process itself. Then coriolis migration show — with this one as well, I can take a look at it. So even in this case we are spawning a temporary machine, a very small Linux machine, which will simply — okay, it's coming up, we're waiting for connectivity — and then it will start looking for what type of operating system is actually running inside there. Okay, it should be just a matter of seconds until it starts, and the next step will consist in chrooting into the operating system and performing all the steps that we were mentioning before, like rebuilding the initrd, injecting cloud-init and so on. Okay, when that is done, it will simply shut down and start a new machine, which is actually a fully migrated machine. That's the main idea — let's see if the demo gods are kind to us.

Correct, yeah. So at this point we are totally independent, so the VMware node can catch fire in this moment, okay? We don't care about it anymore. "Yeah — the keypair from the temporary machine?" No, yeah, it's totally different, it's created on the fly. Yeah, correct. "But physically, I mean, it's a different instance, right?" Yeah. Okay: discovering and mounting the OS partitions. We can see, by the way, what's going on here — as you can see, I have a temporary machine running, with a floating IP and everything, okay? So you can see here.
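As a rough sketch of what such an OS-detection step could look like for a Linux guest — the helper below is illustrative, not Coriolis code; it just parses `/etc/os-release` from a mounted disk, which is one way to identify the distribution without any agent on the source machine:

```python
import os
import tempfile

def detect_guest_os(mount_point):
    """Identify a Linux guest from /etc/os-release on the mounted disk.
    Returns (distro_id, version_id), e.g. ('ubuntu', '14.04')."""
    info = {}
    with open(os.path.join(mount_point, "etc", "os-release")) as f:
        for line in f:
            key, sep, value = line.strip().partition("=")
            if sep:
                info[key] = value.strip('"')
    return info.get("ID"), info.get("VERSION_ID")

# Simulate the mounted root filesystem of the migrated disk
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "etc"))
with open(os.path.join(root, "etc", "os-release"), "w") as f:
    f.write('ID=ubuntu\nVERSION_ID="14.04"\n')

print(detect_guest_os(root))  # ('ubuntu', '14.04')
```

The detected distribution and version then drive which morphing actions apply (which packages to remove or install, how to rebuild the initrd, and so on).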
It discovered that it's an Ubuntu 14.04. So, also based on the version, you might take different actions. And it will start installing the... it removed the open-vm-tools, and it will start now doing additional things. Okay, adding packages: it discovered that it needs to add cloud-init. All of this is performed basically by chrooting into the partition and basically acting as the operating system itself. Note that we don't need any agent running on the source machine, okay? That's very important.

We're pretty much towards the end. So at this point even this part terminated — this is what we call OS morphing — so we switch directly to "finalize replica instance deployment". What's happening is: when a task finishes, the worker reports back to the conductor, and the conductor says, okay, start the next process, no? So it will simply elect a feasible worker that will perform the activity. I think we're pretty much done. Let me see — still running, but we're at the end. "Creating migrated instance": so this is the final machine that is actually going to be running, containing the content that we were looking for. Okay, completed. I think we are at the last step, which consists in deleting the temporary resources. But actually, I can remove the watch, so I can also increase the size of the font. Yeah, completed as well, which means that also my migration is completed.

Now, if I go here and I refresh my instances, my temporary VM is gone, and voilà, I have a thing which is called Ubuntu 14.04, which is obviously the same name as the VM on the source. Now, if I click on it and go to the console — well, to begin with, you can see there is cloud-init running now. Thank you. The same thing works, of course, with Windows and everything. Okay, we are slightly over time, so I have to close it. I don't want to keep you here from going to the party.
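The conductor/worker hand-off described above can be pictured as a toy loop. This is only a sketch: the task names mirror the demo output, and the round-robin "election" stands in for whatever the real scheduling policy is.

```python
def run_execution(tasks, workers):
    """Toy conductor loop: as each worker reports a task completed, the
    conductor elects a worker for the next task. Round-robin election is
    a placeholder for the real policy."""
    log = []
    for i, task in enumerate(tasks):
        worker = workers[i % len(workers)]        # elect a feasible worker
        log.append((worker, task, "COMPLETED"))   # worker reports back
    return log

# Task names roughly as they appeared in the demo output
tasks = [
    "get_instance_info",
    "deploy_replica_disks",
    "deploy_replica_source_resources",
    "deploy_replica_target_resources",
    "replicate_disks",
    "finalize_replica_instance_deployment",
]
log = run_execution(tasks, workers=["worker-1", "worker-2"])
```

The point of the split is the same one made in the talk: the conductor only sequences tasks and records state, while the actual work happens on whichever worker gets elected.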
Actually, I hope to see you at the party, but while I wrap up I'm more than happy to answer any questions if you have any.

That's an excellent question. The replica is still there and can keep running, because the migration can be done in two ways. One is that we create a Cinder snapshot and we create a new volume out of the snapshot, so we don't touch the original one. So you can basically create test migrations just to verify that everything works, and only at the last moment you can decide: okay, I want to shut down the source VM, finalize, and delete the replica. There is a parameter for that. Of course, my source machine is still running — I didn't bother with it. Any other question? Yep.

Excellent question: what about the networking on the VM? So here it depends really on the target and source cloud. VMware has really not much information about networks, no? So what we do is basically create a mapping between the source and the target, which I can show. Here, let me see. Yeah, so we pass a map which tells that, for example, a network called "VM Network Local" on the source cloud has to be mapped on a network called "public" on the other, and another one called "VM Network" has to be mapped on a network called "private" on OpenStack. Okay, so basically the deployer decides what VMware networks have to be mapped on what OpenStack networks. Okay, OpenStack to OpenStack is much easier, because we can read the full network definition and recreate it identically on the target.

Yeah, correct, yeah. That's the main idea: even if you have static networking and you have private networks, you will end up with the same identical definitions. Okay, what most probably will change are your public IP endpoints, because you are going between two different clouds — unless you're migrating to the same identical one.
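That network mapping can be pictured as a simple lookup, roughly like this (a sketch only — the map mirrors the names from the demo, but the function name and error handling are assumptions, not Coriolis's actual API):

```python
# The map the deployer passes in: source network name -> target
# OpenStack network name, using the names mentioned in the demo.
NETWORK_MAP = {
    "VM Network Local": "public",
    "VM Network": "private",
}

def target_network(source_network):
    """Resolve which OpenStack network a source NIC should land on,
    failing fast on an unmapped source network instead of guessing."""
    try:
        return NETWORK_MAP[source_network]
    except KeyError:
        raise ValueError("no mapping for source network %r" % source_network)

print(target_network("VM Network"))  # private
```

Failing fast on unmapped networks is the safer design here: silently dropping a NIC, or attaching it to a default network, would surface only after the migrated machine boots.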
You can even migrate on the same cloud, between two different tenants, okay? But that's just the exception, if you want. Yep — yeah, you can do that as well. Not necessarily, because if you have VMware as a source, for example, since it uses the CBT APIs, it will extract the content out of the disks, so it doesn't really matter where the disks are residing.

Sorry — domains? From an OpenStack to OpenStack perspective, I mean, or Windows domains? Tenants, at the moment, we expect to have them recreated, because the idea is that you have a tenant and you move all the data from that tenant into another tenant, no? So it can be automated as well. We never had it as a request, but we could theoretically even map tenants, to recreate all the tenants as well as part of the process.

Excellent question. So the big bulk of it, the core of it, is all open source. We keep just a small amount of things closed, but our goal is to open source everything at some point. So, as I said, a big part of it is open source: the OpenStack part is open source, as well as the Microsoft one. So the core, let's say, including the migrations, is open source. The providers — some of them we keep closed source, and some of them are open source. Okay, our final goal is to open source everything, that's our primary aim.

Okay, yeah, I will tell you more: we retain even the MAC addresses. Yeah. Yeah. Yeah, the idea of retaining the MAC addresses is that it simplifies a lot the recreation: when the machine starts, it finds the same identical MAC, so we don't even have to bother. Yeah, we can do it also if you cannot retain the MAC addresses, but in that case it becomes difficult if you have more than one network adapter. Yeah. Yeah, thank you guys.