OK, so good afternoon, everyone. Thanks for joining us right after lunch — I know it's a lazy slot, so I'll try my level best to keep you entertained. Today we will be discussing GUTS. GUTS is a migration tool which migrates resources from other clouds to OpenStack, OK? My name is Bharat Kumar. I'm an OpenStack specialist at Aptira. How many of you know Aptira? Aptira is the Asia-Pacific's leading OpenStack services company: it maintains private clouds, public clouds, and hybrid clouds, and it also provides technology training to OpenStack specialists. I was leading the GUTS project at Aptira — designing, developing, and maintaining it. So, why do we have to migrate? Here I'm using the word "migrate" to mean migrating an instance, a resource, or a workload — all three are the same thing, OK? A workload can be a machine, an application, or a volume; it can be anything. So why migrate? Suppose I created an instance in Amazon AWS long ago, and later the thought comes to me that I want to move it to an OpenStack cloud — an OpenStack public cloud, or my own private cloud. How can I do that? I can't simply rewrite all my applications on the new OpenStack cloud, right? There should be a tool, or a mechanism, to migrate my instance from AWS to an OpenStack public or private cloud. And not only from AWS — it can be VMware or Hyper-V, and vice versa: we should be able to migrate OpenStack resources out to AWS or Hyper-V too. It should work both ways. That is what migration means here — I hope the word is clear now, OK? So, can we do the migration manually — migrate an instance, a volume, or a network by hand? Yes, absolutely we can. But what is the problem with that?
There are lots of documents and blogs available. If you Google "how to migrate an instance from AWS to OpenStack", or Hyper-V to VMware, anything, you will find plenty of blogs, and if you follow them you will get all the details: how to migrate, how to convert the disk, which tools you need to install — everything. But there are many difficulties. We spent a lot of time talking with administrators to learn what difficulties they face during the migration phase. First, the migration process depends on the type of cloud — what the source cloud is and what the destination cloud is. Migrating an instance from VMware to OpenStack involves one set of steps; migrating an instance from AWS to OpenStack is a separate process. It also depends on the type of the instance or resource. If the instance is a Windows machine, the steps differ — the disk formatting, everything is different. If the instance is Linux, that is different again. So it depends on the type of the cloud as well as the type of the instance; there is no single way to migrate everything. The second thing is that there are hypervisor-specific tools. If we are booting an instance in VMware, it requires VMware Tools. If we are booting an instance in OpenStack on KVM, it requires the virtio drivers. When we migrate, we also need to take care of those tools: remove VMware Tools as we move out of VMware, and install the virtio drivers as we move into OpenStack. We have to manage all of that, and do everything manually, OK? And there can be a huge amount of data. One instance is not just a single disk.
It can have many disks attached to it — maybe terabytes, even petabytes of data — and we should be able to migrate everything in a secure way. Now, all of this is doable: I can do these things on my own, manually. I can follow the blogs and the tools, and migrate one or two instances. But imagine I have hundreds of instances, hundreds of resources. How can I do all of those? The steps are very repetitive and very complicated. So we need an automated tool to migrate everything at once — the migration should be simple, and we should be able to migrate instances in an automated way. This is where GUTS came into the picture. So what is GUTS, actually? GUTS is a workload migration engine which automatically moves resources from traditional clouds to OpenStack. Those traditional clouds can be VMware, AWS, Hyper-V — anything. GUTS provides an automated, robust, and efficient way to migrate instances and resources — not only instances, resources in general. We can migrate anything from any cloud to any cloud. In this talk I'm concentrating only on OpenStack as the destination, but using GUTS we should also be able to migrate resources from OpenStack to VMware or from OpenStack to AWS — the reverse direction is possible too. So what does GUTS do? GUTS migrates workloads. As I said, a workload is just a resource: it can be an instance, a volume, a user application, anything. It migrates workloads from traditional platforms to OpenStack — VMware, Hyper-V, AWS to OpenStack — and from public cloud to private cloud. If I just want to move my VM from public to private, we can do that using GUTS. And there is one more kind of migration.
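The repetitive recipe described above — export the disk, convert it, swap the guest tools, upload and boot — is exactly what an automation layer ends up looping over. A minimal runnable sketch of that idea (all class and function names here are illustrative, not GUTS's real code):

```python
# Sketch of the per-instance recipe an automated migration tool loops over.
# Everything here is illustrative, made runnable with toy stand-ins.

class Cloud:
    """Toy stand-in for a source or destination cloud driver."""
    def __init__(self, name, disk_format):
        self.name = name
        self.disk_format = disk_format

    def export_disk(self, instance):
        return f"{instance}.{self.disk_format}"       # e.g. "vm1.vmdk"

    def upload_and_boot(self, instance, disk):
        return f"{instance} booted on {self.name} from {disk}"

def convert_disk(disk, target_format):
    """Swap the disk-image extension, standing in for a real format conversion."""
    base = disk.rsplit(".", 1)[0]
    return f"{base}.{target_format}"                  # e.g. vm1.vmdk -> vm1.qcow2

def migrate_all(instances, source, destination):
    """Doing this by hand twice is fine; doing it 100 times is the problem."""
    results = []
    for inst in instances:
        disk = source.export_disk(inst)                       # download source disk
        disk = convert_disk(disk, destination.disk_format)    # vmdk -> qcow2
        # (a real tool would also remove VMware Tools / add virtio drivers here)
        results.append(destination.upload_and_boot(inst, disk))
    return results

vmware = Cloud("vmware", "vmdk")
openstack = Cloud("openstack", "qcow2")
print(migrate_all(["vm1", "vm2"], vmware, openstack))
```

The point of the sketch is just that the per-instance steps are identical, so they belong in a loop inside a tool rather than in an administrator's hands.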
This is the most important one, with the most capabilities: we can migrate instances and resources from OpenStack to OpenStack as well, OK? So let's go through them. First, traditional virtualization platform to OpenStack: here we can migrate instances from VMware, AWS, Hyper-V, et cetera, to OpenStack. For public to private, you can use any public cloud — AWS, Microsoft Azure, anything — and migrate to your own OpenStack private cloud. And then OpenStack to OpenStack. Why would we need to move a resource from one OpenStack to another? What is the point? Well, you know, right? Every six months a new release comes out — new features, bugs resolved, a complete new set of software every six months. So suppose I deployed my private cloud using Icehouse, and now I want to move to Mitaka, or Newton, or Ocata. I want my same instances to be there on the new version of the OpenStack cloud. To do that, we can use GUTS: we can migrate instances, and almost all OpenStack resources. So OpenStack-to-OpenStack migration is needed in this case — when we are upgrading the cloud, or during any maintenance phase, when we are moving resources from the source OpenStack cloud to the destination OpenStack cloud, we can use GUTS. Currently, GUTS supports almost all OpenStack releases, from Icehouse to the latest one, Ocata — and it will continue; it is not limited to those. We can migrate instances, or any resource, from Kilo to Newton, Kilo to Mitaka, or Mitaka to Ocata — any combination is possible.
In OpenStack-to-OpenStack migration, GUTS can migrate almost all OpenStack resources: tenants, users, security groups, key pairs, flavors, networks, volumes, and instances. This is just the current set of resources we support; in future we can add more — Heat stacks, anything. Everything is pluggable, so we can easily add new resource types. Using this, we get the same environment on the new cloud: the same users, the same key pairs with the same names (the key itself is not the same, but the name is), even the same flavors and security groups — everything will be there. You will feel the same environment in the new cloud as well, using GUTS. That is the advantage. And obviously we can migrate user applications as well: when we migrate workloads, volumes, and instances, the user gets his own applications on the new cloud — there is no doubt about that. So now we know that the missing piece in the migration puzzle is GUTS. Let's learn more about GUTS — how it functions, and what its internals look like. GUTS is a distributed, interactive, synchronous, and pluggable product. It looks like all the other OpenStack components — distributed, synchronous, pluggable; it has almost all the OpenStack architectural traits. Just as other OpenStack services have internal components, GUTS has the GUTS API, the GUTS scheduler, and the GUTS migration service. These three communicate through a messaging queue bus, so they stay in sync. And they are interactive: GUTS interacts with the source cloud's RESTful APIs to get information about the resources.
And GUTS also interacts with the destination cloud's RESTful APIs to create resources on the destination side. It knows how to communicate with source and destination clouds. Suppose I want to add support for a new cloud, say Microsoft Azure: we would write a driver which mediates between GUTS and Microsoft Azure, and it stays synchronous with the rest. And suppose I'm migrating at scale — using GUTS I can migrate from one up to hundreds of instances at a time, hundreds of resources at a time. With that many migration processes going on at a single point in time, there can be many synchronization issues, but GUTS handles the synchronization, keeps all the logs, and works in a robust way. And it's pluggable. Like other OpenStack components such as Cinder and Nova, it follows a pluggable architecture. In Cinder, if you know, there is cinder-volume: whether the backend is LVM, Gluster, or Ceph, each has its own driver. Those drivers don't disturb the core Cinder functionality or the core Cinder modules; they just add their features in the driver files. In the same way, GUTS itself doesn't know or care about any particular source or destination cloud — it doesn't know how to communicate with VMware or OpenStack. We write drivers to communicate with OpenStack, or whatever cloud we want to support. We designed GUTS in such a way that clouds can be added or removed at any point in time, and OpenStack operators can do that themselves. So if tomorrow we want to add support for a new type of cloud, we can do it using this pluggable architecture.
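A driver-based plugin design like the one described — the core never talks to a cloud directly, only through a driver contract — can be sketched like this. The class and method names are assumptions for illustration, not GUTS's actual API:

```python
# Sketch of a pluggable source-cloud driver interface, in the style described
# in the talk. Names are illustrative assumptions, not real GUTS code.
import abc

class SourceDriver(abc.ABC):
    """Everything the migration core needs from a source cloud. The core
    calls only these methods; it never speaks vSphere/EC2/Nova directly."""

    @abc.abstractmethod
    def list_resources(self):
        """Return the resources (VMs, volumes, ...) available on the source."""

    @abc.abstractmethod
    def fetch_disk(self, resource_id):
        """Download a resource's disk so it can be converted and uploaded."""

class VSphereDriver(SourceDriver):
    """Supporting a new cloud means only adding a class like this one;
    the core modules are never touched."""

    def __init__(self, inventory):
        self.inventory = inventory        # toy stand-in for a vSphere connection

    def list_resources(self):
        return sorted(self.inventory)

    def fetch_disk(self, resource_id):
        return f"{resource_id}.vmdk"      # VMware disks arrive as VMDK

drv = VSphereDriver({"test_vm_2", "test_vm_1"})
print(drv.list_resources())
```

This mirrors the Cinder analogy from the talk: LVM, Gluster, and Ceph each get a driver file, and the core stays untouched.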
There is a set of functions we have to implement when adding a new cloud, OK? Now, these are the system components of GUTS. Like other OpenStack services such as Nova, we have an API, a scheduler, and a migration engine. The GUTS API, like other API services, accepts and responds to requests from end-user API calls. It also enforces policies — an administrator can do this many things, while a normal user can perform only these — so the GUTS API enforces those policies on the operations. By default it listens on port 7000; I hope none of the other OpenStack services are using that port. Next, the scheduler: it schedules each migration operation to the appropriate migration node. We can have any number of migration services running — it is completely distributed, so we can have the GUTS API on one node, the GUTS scheduler on another, and multiple GUTS migration services on separate nodes, just like nova-compute or cinder-volume. The GUTS scheduler takes the request from the API and forwards it to the appropriate migration node. It also periodically collects the status of each and every migration node — every one minute, or every two minutes; again, it is configurable — whether that migration node is up and running, how much conversion space is available, how much free space is available, whether it is reachable: everything. Based on that, it applies some filters and then schedules to the proper migration node. And then there is the actual migration engine, the GUTS migration service. We can run multiple instances of this service — as many as we want.
This daemon does the actual operation: it gets the resource from the source cloud and creates the resource on the destination. In between there are many things to do — the disk conversion, installing and uninstalling tools — and all of it is done by the GUTS migration service. Any doubts so far? OK, so this is the architecture. We have the GUTS API; the end user communicates with it, either directly or through the clients — the GUTS Python client and the Horizon plugin. We have a centralized GUTS database, which can be MySQL, PostgreSQL, or anything, and a common messaging queue bus — RabbitMQ by default, though we could use Qpid, ZeroMQ, or anything. The GUTS scheduler schedules migrations to the underlying migration nodes, and those migration services communicate with the underlying cloud environments — OpenStack, VMware, Hyper-V, anything. In the diagram — I hope you can see the colors — the red arrows are RESTful API calls, and the green ones are RPC calls. So that is the architecture. Now let's see the features — first, which OpenStack resources GUTS supports for migration. GUTS can migrate instances, volumes, networks, security groups, flavors, key pairs, et cetera. Suppose you have a specific tenant, and in that tenant some set of flavors, key pairs, and so on, and you want all of those on the new cloud: you can migrate them in a single migration event. It will create the same set of flavors, key pairs, security groups — everything. One more thing to point out: GUTS can also migrate the ephemeral disk. Generally, in OpenStack Nova, we cannot snapshot the ephemeral disk.
We can snapshot only the root disk, but GUTS can migrate the ephemeral disk as well. And when you migrate an instance, if any volume is attached to it, that volume is also migrated to the new cloud and attached to the destination instance. So those are the resources we support in OpenStack. Beyond those, GUTS also deals with hypervisor-specific operations, like converting the disks. In VMware we have VMDK; in OpenStack on KVM we use qcow2. So we have to do the conversion, and GUTS takes care of that: it analyzes which kind of source and destination clouds are involved, converts the disks accordingly, and uploads them to the destination. It also installs and uninstalls hypervisor-specific tools — VMware Tools, whatever tools — depending on the source and destination clouds. Actually, the placement of the GUTS migration service is the key point here; I'll come to that in the next slides. Beyond these features, what add-ons do we have? We developed a Horizon plugin, so the GUTS UI can be accessed from the normal Horizon dashboard, like other OpenStack services. We also have a DevStack plugin, so GUTS can be installed automatically using DevStack. And we have Ansible and Puppet modules. GUTS also has the capability to roll back: migration is not a simple process — it is a series of events, and if something fails, GUTS rolls everything back to the first step so that our infrastructure, our cloud, stays in a consistent state. And it automatically cleans up: when the migration process ends, it cleans up all the temporary resources created during the migration. Next, the GUTS workflow.
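The talk doesn't show how GUTS performs the VMDK-to-qcow2 step internally, but the standard open-source tool for exactly this conversion is `qemu-img`. A minimal hedged sketch of what that step could look like:

```python
# Sketch of the VMDK -> qcow2 conversion step described above, using qemu-img
# (the standard tool for this; whether GUTS shells out to it is an assumption).
import shutil
import subprocess

def convert_cmd(src_path, dst_path, out_format="qcow2"):
    """Build the qemu-img command line for a disk format conversion."""
    return ["qemu-img", "convert", "-O", out_format, src_path, dst_path]

def convert(src_path, dst_path):
    """Run the conversion; raises if qemu-img is missing or the convert fails."""
    if shutil.which("qemu-img") is None:
        raise RuntimeError("qemu-img is not installed on this migration node")
    subprocess.run(convert_cmd(src_path, dst_path), check=True)

# Example: the command that would convert the demo VM's disk.
print(convert_cmd("test_vm_1.vmdk", "test_vm_1.qcow2"))
```

`qemu-img convert` auto-detects the input format, so only the output format needs the `-O` flag; the resulting qcow2 image is what would then be uploaded to Glance.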
This diagram is a bit complicated. At the top you can see the GUTS node. It runs Keystone — GUTS uses Keystone for authentication — and the other GUTS services: API, scheduler, and migration. On the left-hand side we have source clouds, and on the right-hand side destination clouds. Let's take the example labeled one: from VMware to OpenStack, the red VM. On both the source and the destination, we have a GUTS migration service running. As the source is a VMware hypervisor, the disk will be in VMDK format, so we have to convert it to qcow2. Since the GUTS migration service runs on both clouds, the migration work can run in either place — we can use either VMware's GUTS migration node or OpenStack's. That's up to the scheduler: it will pick one suitable migration node and schedule the job there. That is how the migration happens in the first case. The second case, the green VM, goes from OpenStack to Hyper-V. Here the Hyper-V side has a GUTS migration service running, but the OpenStack cloud doesn't. So all the conversion and migration work runs on the Hyper-V side: the migration service pulls all the resource details and data from the OpenStack source cloud, does the conversion there on the Hyper-V side, uploads the result to Hyper-V, and boots the instance there. In the third case, migrating an instance from Hyper-V to VMware, neither cloud has a GUTS migration service running, so I can use any other available GUTS migration node — in this example, the one on the GUTS node at the top.
In that case, the entire resource — all the data — is copied from Hyper-V to the GUTS node; it does the conversion and some disk operations, installs and uninstalls some cloud-specific tools, and then sends everything on to VMware. That takes more time than the earlier two cases, because the entire data has to travel from Hyper-V to the GUTS node and then from the GUTS node to VMware. So placing the GUTS migration service well is the key to optimizing the migration process. OK, so we have a demo. Actually, I assumed there would be no internet available here, so I captured a few screenshots; I'm just going to show those. This first one uses the CLI. What I'm showing here is a migration from VMware to OpenStack, so I have a source cloud and a destination cloud: the source is VMware and the destination is OpenStack. This is the command to add the VMware source cloud: guts source create. GUTS has its own Python client, and using it we add the VMware source. We have to pass all the credentials — the vSphere or VMware credentials: the username, the password, which host it is running on, which port to use, and so on. All of those we pass to GUTS. After that, if you run guts source list, you can list all the available sources; as I have added only one vSphere source, I see only that one. And this is the command to add the destination OpenStack cloud — in this case, I'm adding an OpenStack destination.
I'm passing all the parameters required to communicate with the destination OpenStack cloud: which Neutron API version it uses, which Cinder API, which Nova API — everything. There are default values as well, but to be more specific, we should give the actual API versions, the actual API client versions. On the first screenshot you can see the available resources. When you run guts resource list, after adding the VMware source, GUTS communicates with the source hypervisor and pulls the list of resources available on the VMware source. Here there are three resources: test VM one, two, and three. We can migrate any of those VMs to the destination OpenStack cloud, and below is the command to do that: guts migration create. We provide a name, specify the ID (or name) of the resource, and specify the destination name — here the destination is OpenStack. After this, you can keep track of your migration operation. The very first time I run guts list, it shows the event as "scheduling migration host" — it is selecting the migration host. The second status is downloading the instance disk from the source. The third is converting the disk to qcow2 — as we are going from VMDK to OpenStack, we have to convert the disk from VMDK to qcow2. The fourth is uploading it to the destination Glance. At the end, you get the success status, which means the VM has been booted on the destination. You can also see the resource's UUID on the destination side.
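The status progression shown in the `guts list` screenshots can be consumed programmatically as a simple polling loop. The client object and method below are hypothetical — the talk doesn't show the real Python client's API — but the event sequence is the one from the demo:

```python
# Sketch of polling a migration's status until it finishes, against a
# hypothetical client object (GUTS's real Python client API may differ).
import time

TERMINAL = {"success", "error"}

def wait_for_migration(client, migration_id, poll_seconds=5.0, sleep=time.sleep):
    """Poll until the migration reaches a terminal state; return the event history."""
    history = []
    while True:
        status = client.get_status(migration_id)
        if not history or history[-1] != status:
            history.append(status)               # record each new event once
        if status in TERMINAL:
            return history
        sleep(poll_seconds)

class FakeClient:
    """Replays the exact event sequence from the demo screenshots."""
    def __init__(self):
        self.events = iter([
            "scheduling migration host",
            "downloading instance disk from source",
            "converting disk to qcow2",
            "uploading disk to destination glance",
            "success",
        ])

    def get_status(self, migration_id):
        return next(self.events)

print(wait_for_migration(FakeClient(), "mig-1", sleep=lambda s: None))
```

Injecting `sleep` as a parameter keeps the loop testable without real delays — the same trick works when driving any long-running OpenStack operation.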
That is what gets updated in the last guts list command. So that is the CLI example, migrating resources from VMware to OpenStack. We also have a UI demo, which migrates resources between OpenStack and OpenStack. Here I have already added the clouds. The first screenshot shows the list of source clouds I have — only one, which is OpenStack; the text is very small, but the source is the Kilo version of OpenStack. The destination is Mitaka. So in this case, I'm planning to migrate my resources from Kilo to Mitaka. These are all the resources I can get from the source cloud, and from here I can create the migration operation. The migration create operation asks many questions — just like creating a new VM on the Nova side asks for availability zones, how much RAM, and so on — all the things we need to configure, right? In the same way, it asks for security groups, which key pair to use, and so on. If you don't provide anything, it will try to fetch those things from the source OpenStack cloud. Here it is asking me to select the instance name, flavor, network, security group, key pair, and so on. If I select the appropriate ones, it will boot the instance with those particular security groups and settings. After that, we can watch the status directly here: it keeps updating until it reaches active — or maybe error as well. So that is OpenStack-to-OpenStack migration using the UI. That's it. Any questions? Yeah? The question is whether it is open source or not. No, it's not open source. Sorry? Sorry, which VM?
Yeah, it supports that, yeah. Sorry — downtime? That depends on the source hypervisor. OpenStack supports live snapshotting, right? So while the instance is running, we can take the snapshot — there is no downtime. But to be more secure, if the user wants, we can shut down the VM for some time, take the snapshot, and power it back on. That is optional. Some hypervisors, some clouds, don't support live snapshotting; in those cases we have to power down the VM, take the snapshot, and power it back on. Question here? Yeah?

In terms of commercial offering, how are you organizing this — on a software-as-a-service basis, a license, or a subscription? How is Aptira promoting GUTS? Even if you don't have this settled yet, how are you thinking about going forward? And secondly — no need to mention names — what cases do you already have in production?

I can answer the commercial side. Basically, until now we've had it as an add-on to our consulting and managed services: a customer comes to us and says, "I'm doing this project, I want to migrate the VMs," and we include GUTS as part of it. We are looking at doing it on a per-VM or per-migration basis, and we are open to partners coming to us and bundling our solution with theirs. We are a company that believes in open source and in working with partners, so if there's a model a partner can think of, we'll be happy to work with it. As far as production is concerned, most of our use cases have come from the big data side — from people who wanted to implement big data but were stuck with VMware, and decided to ditch the whole thing and start afresh.
And we have about three or four of those in production right now. Another big use case we have in production is when people have not upgraded their OpenStack for too long, and suddenly they're looking at Icehouse to Newton — it's just easier to spin up another control plane and move the VMs across. Thank you.

If you allow me, as you were answering: the scenario you described — customers currently using VMware and moving to a second-generation cloud based on OpenStack — is what we are seeing all over the place. When it comes to an OpenStack distribution from, let's say, Red Hat, or even SUSE or Mirantis, how does the GUTS solution fit into the supported configuration? Meaning, it's going to be deployed on top of, say, Red Hat OpenStack Platform 10, and it won't be disrupting anything. How does it work?

So, GUTS will work on a standalone Keystone. We don't need to be part of the destination or the source. You give me a physical machine, or you give me a VM, and I can install GUTS in between, with its own Keystone and its own RabbitMQ. Basically, we just need the two public API endpoints: we point at Nova, Neutron, whatever you want to migrate — the public API endpoint, or an internal URL if you want to avoid HTTPS. You use the API, right? Yeah, yeah — and it just does the migration. Awesome. What we rely on is the vendors looking at the OpenStack interop certification and having those ticks. If the API is broken, there's nothing we can do. Exactly. Right — but the vendors are all doing the right thing, and so far we haven't noticed a vendor who isn't following the API. They might have added stuff — there might be Ceph running, for instance — but that's something we take into account before we do the migration. So for us, it's a very simple thing.
If everyone is playing by the rules, if everyone has API compatibility, it's very simple for us to pull VMs from one side and push them to the other. Yeah, all right. Thank you very much. Not a problem, yeah.

Yeah — reliability, yeah. I didn't mention that, but it does have reliability built in. The migration is a series of events, right? After every event, we check whether that event was successful or not. If it was not, we redo it, or go back to the previous stage, so that we maintain consistency and reliability. And after creation on the destination side as well, we verify whether we have the same data or not. Only once everything is verified do we say the migration is successful. OK, thank you.
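The talk doesn't specify how that final same-data check is implemented; one common way to verify a migrated disk without holding it in memory is to compare content hashes streamed in chunks. A hedged sketch of that idea:

```python
# Sketch of a post-migration verification step: compare checksums of the
# source and destination disk data before declaring success. (The exact check
# GUTS performs isn't shown in the talk; a content hash is one plausible way.)
import hashlib

def disk_digest(chunks):
    """Hash a disk's data streamed in chunks, so terabytes never sit in RAM."""
    h = hashlib.sha256()
    for chunk in chunks:
        h.update(chunk)
    return h.hexdigest()

def verify(source_chunks, dest_chunks):
    """True only if both sides hold byte-identical data."""
    return disk_digest(source_chunks) == disk_digest(dest_chunks)

src = [b"boot sector", b"filesystem data"]
print(verify(src, src))                       # identical data -> verified
print(verify(src, [b"corrupted transfer"]))   # mismatch -> do not mark success
```

Note that a byte-level hash only makes sense when the bytes should be identical; after a format conversion (VMDK to qcow2), a real check would have to compare the guest-visible data instead of the container files.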