Welcome everyone to another OpenShift Commons briefing. Today we're very excited to have Miguel here — he has been working with the engineering teams on these migration toolkits, and we're so excited about the Migration Toolkit for Virtualization. It's definitely been long awaited. Miguel, do you want to introduce yourself and talk to us about migration?

Sure. My name is Miguel Perez Colino. I'm the product manager for the Migration Toolkit for Virtualization, in the modernization and migration team at Red Hat. I'm taking care of this tool, which reached the beta stage just last week, so I thought, okay, I have to go to OpenShift Commons to show it.

First things first: OpenShift, containers, and VMs. The Migration Toolkit for Virtualization is intended to move virtual machines, initially from VMware, to OpenShift. In the future we'll add more sources, but we'll keep the same target: OpenShift Virtualization. So, what is this about? There's a saying that containers are not virtual machines, and another that goes "containers are Linux, and Linux is containers." A container is a way to isolate a process — a super-isolated process that uses kernel namespaces, cgroups, and SELinux behind the scenes to isolate those processes very, very well. Virtual machines, on the other hand, require a guest OS and a hypervisor. But the thing is that in Linux, a virtual machine is also a process, and we can encapsulate that process. The Linux kernel includes KVM, the Kernel-based Virtual Machine, which is an engine for running virtual machines, used in almost every public cloud for virtualization. And it's very performant — try to think what a 1% increase in performance would mean to a public cloud.
So, try to understand that this is a really good engine for virtual machines. It's included in Linux — it's included in Red Hat Enterprise Linux and in RHEL CoreOS, which use the same kernel and the same KVM. So we could leverage that, and all the experience we have from Red Hat Virtualization and OpenStack, to build virtual machine capability on OpenShift. This is how we get to OpenShift Virtualization.

What is OpenShift Virtualization? Well, we have adapted Kubernetes and OpenShift to be able to run virtual machines. We created the KubeVirt project three years ago, if I recall correctly, and KubeVirt matured and became pretty solid. Then one year ago, in April 2020, we released OpenShift Virtualization, and now it's available to run virtual machines on OpenShift, next to your containers, with all the benefits that running something on Kubernetes brings: the declarative way of deploying infrastructure, the operational benefits, Prometheus to gather metrics — all those benefits that you know and love from OpenShift, plus the interfaces to network and storage that OpenShift has been developing for so long.

So, that's good, but what is this for? It's all about modernization and migration. Customers, users, and developers worldwide all want to become more modern because it brings a lot of benefits. When development teams start using containers, they become more agile and faster: they have a lower time to market, they release more frequently, and in case there's an incident, their time to restore is reduced. These are the metrics we all know get improved when you start working in a cloud-native way. So what if we could move the workloads that would make sense to have next to containers into OpenShift, have them right next to those containers so they behave more like containers, and then modernize them over time?
So, it's a lower-friction way to bring those VMs into a container-like world, have one converged infrastructure for the critical workloads we're going to manage, and then do that modernization process step by step over time. So, what are we thinking? What if we could automatically convert VMware images to KVM images on OpenShift? That would lower the cost of migrating the workloads — a direct benefit.

What would this look like? Let's say we have WebLogic servers with Apache frontends and a database, all running on VMware. We could move them to virtual machines and then modernize, for example, the Apaches — they are pretty easy — and put those Apaches in containers, or even swap them for nginx, and then redirect all those frontends to containers. Then we could, for example, take some applications from WebLogic to JBoss, or keep them on WebLogic in containers — that's something you can also do on OpenShift: running WebLogic in containers, same as WebSphere or JBoss. But in this case let's say, okay, we're going to move this application to JBoss. Let's modernize the application, make it leaner and more standards-oriented so it can run on JBoss. We could end up modernizing all the applications, and even moving the database into containers at some point. So this makes for a very easy iterative approach to modernization. It's a really good way, as I say, to first shift the workloads into virtual machines and then do the modernization at your own pace.

How do we approach this? Well, you have your workloads, some strategic and some not strategic, and you can assess them with Pathfinder and analyze them with the Migration Toolkit for Applications. And then, what can you do?
You could replatform them: move them as VMs when they are VMs, and as containers when they are containers. You could refactor them and repackage them as containers — that is, modernize the application. You could repurchase, if it's a third-party application. Then you put them in an OpenShift project and test them in OpenShift: they're good, or they're not good, so we fix them and test again. Once they are good, you deploy to production on OpenShift, and you're modernizing. What if we want to enhance further? Well, we go through the refactor loop again and improve, improve, improve. This is, for example, a pattern we have seen with large monoliths: you split them and modernize, modernize, modernize, and then the modernization is complete.

What do you do with the non-strategic workloads? Well: retire, rehost, retain. Some of them you retire, some you rehost, some you retain. Retire is twofold — for example, if you're running your own email servers, you probably want to move that to software as a service. So there are several options for non-strategic workloads that could instead be provided as a service.

We are working mostly on replatform, refactor, and repurchase, focused on replatform and refactor. The Migration Toolkit for Virtualization is squarely replatform: it moves VMs from VMware to OpenShift Virtualization. What are the benefits and the operating costs? You can see there are more operating costs in rehost and retain, because you have to keep your current infrastructure. If you retire, it's pretty easy, but you lose some services. When you replatform, there's more business benefit, and when you refactor, there are even more business benefits to obtain. So, what do we do in my team? Well, we have tools for these cases.
For rehost, we have a tool called MTC, the Migration Toolkit for Containers, whose upstream project is called Crane. For replatform, we can move containers using Move2Kube, which brings containers running on Cloud Foundry to OpenShift, and Forklift, the upstream for the Migration Toolkit for Virtualization, which moves VMs into OpenShift Virtualization as VMs. And for refactor, you have the Pathfinder and Windup projects, which result in the Migration Toolkit for Applications. With those you can assess and analyze applications: first, with the assessment, choose which applications to work on first, and then, with the analysis tool, start transforming them to put them in containers.

Any questions so far? Is there anything in the queue? Oh, not yet. This is awesome. Keep going. Okay, I keep going.

So, as I said, we have these projects upstream, in the Konveyor.io project. I really suggest you visit it. If you go to github.com/konveyor, you will find all the projects we put there — some of them are still being migrated over; this is pretty fresh, pretty new. So I encourage you to go there: there are mailing lists, there are forums, and we even have meetups to show the inner workings and all the technical details of the projects, to help everybody join and contribute. So you see these projects: Crane, Forklift, and Tackle. For Crane, we have the downstream tool, the Migration Toolkit for Containers, to migrate from OpenShift 3 to 4 and also from 4 to 4. So, if you have a cluster that is getting full and you want to move some applications out of it, and your pipelines are not that easy to repurpose, you can use MTC to move those containers and their persistent volumes easily from one cluster to another.
And you have Tackle, which becomes the Migration Toolkit for Applications — as I said, to assess and analyze applications. You can analyze Java applications with this tool, and it will tell you, okay, you have these things in the application: for example, you're using a proprietary logger, or you have Windows-specific paths, or you're using a proprietary class from the Oracle JDK and you want to move to OpenJDK. You can use MTA, the Migration Toolkit for Applications, for that.

What I'm going to talk about is the Migration Toolkit for Virtualization. So, what does it do? It is built to migrate virtual machines at scale to OpenShift Virtualization. We have built tools to migrate at scale before, and we have used those tools at scale — thousands of VMs have been migrated — and now we are building this tool with all the lessons learned from the previous ones, but with OpenShift Virtualization as the target. So you can mass-migrate virtual machines to OpenShift Virtualization. Where are we now? Well, the beta is out, so if you are on OpenShift it's very easy for you to install it, and I will demo it in a few minutes.

About the architecture: everything is OpenShift, everything is container-native, everything runs in containers — we use all the native pieces that we can. You see that we have a source, which in this case is VMware vSphere, and during this year we'll be adding Red Hat Virtualization and OpenStack as sources, in case you want to move VMs from those to OpenShift Virtualization. We have an inventory service that gathers all the information from VMware vSphere, and we have a validation service that checks, okay, how is this VM configured? And it runs checks on it.
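To give a feel for how such a check can be expressed, here is an illustrative sketch in the style of an OPA/Rego policy. The package name, rule shape, and input schema are assumptions for illustration, not the actual rules the validation service ships with:

```rego
# Illustrative only: a hypothetical validation rule in the style of an
# OPA/Rego "concern" policy. Real rule names and input fields differ.
package migration.validation

# Flag VMs that share a disk with another VM, so the concern surfaces
# in the plan before the migration starts instead of failing mid-copy.
concerns[flag] {
    some i
    input.disks[i].shared
    flag := {
        "category": "Warning",
        "label": "Shared disk detected",
        "assessment": "A shared disk would be migrated as independent copies; the two VMs would no longer see a single disk.",
    }
}
```

Each VM in the inventory is evaluated against rules like this, and any concerns raised are shown alongside the VM before you commit to a plan.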
And if something is not right, it will be raised: hey, I found something that could be an inconvenience when migrating this VM to OpenShift Virtualization — so you don't start a migration that could fail. For example, maybe you need to check for raw device mappings attached to the VM that you want to keep as raw device mappings, or that two VMs are sharing a disk, so you don't end up with two VMs each with its own disk instead of two VMs with one single disk attached to both. These kinds of things are checked beforehand, so the migration runs as smoothly as possible, and we are adding more and more rules to the validation service to ensure that when you migrate a VM, it's as successful as possible.

Then we have the user interface, of course, built with PatternFly 4. I love the PatternFly project — it makes our interfaces look so nice. We try to make the interface as simple and pleasant to use as possible, even though it's powerful. And what do we have in there? We have mappings, to map resources from source to target; migration plans, to say which VMs are going to be migrated in the same batch; and the migration run, to execute the migration. Then, of course, there's the controller, plus the VM import capability in OpenShift Virtualization that we leverage to do the actual move, handled by the import operator. So this is the architecture. If you want another session with more technical details, we can invite my friend Fabien Dupont and talk about the internals.

What else? Okay, providers. First thing, we need to connect source and target. So we have a provider that is the source — right now, VMware vSphere — and we have the target, which is OpenShift Virtualization.
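Under the hood, providers are Kubernetes custom resources. As a sketch — the fields follow the upstream Forklift `forklift.konveyor.io/v1beta1` API as I understand it, and the hostname and credential values are made up — registering a vSphere source could look roughly like this:

```yaml
# Hypothetical vSphere source provider; URL and secret values are examples.
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: vcenter
  namespace: openshift-mtv
spec:
  type: vsphere
  url: https://vcenter.example.com/sdk        # vCenter SDK endpoint
  secret:
    name: vcenter-credentials
    namespace: openshift-mtv
---
# Credentials referenced by the provider above.
apiVersion: v1
kind: Secret
metadata:
  name: vcenter-credentials
  namespace: openshift-mtv
type: Opaque
stringData:
  user: administrator@vsphere.local
  password: changeme
  thumbprint: "52:3B:AA:11:22:33:44:55:66:77" # vCenter certificate fingerprint
```

The UI fills these in for you; the point is that everything the wizard creates is a plain resource you can inspect and manage declaratively.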
So, you have to connect the tool to the source provider, VMware vSphere, with the provider credentials, and also to the target. When you deploy the Migration Toolkit for Virtualization, the OpenShift instance on which you deploy it gets configured automatically as a target. So, very easy: if you want to do a simple migration, it's going to be very straightforward. We have the sources and the destinations — okay, this is from where to where.

Now, how? How do we map what is already there? Normally, what you have in the source is a set of configured networks. They are usually attached to VLANs, depending on how you configure it, but it's pretty common to have a certain set of VLANs attached to your virtualization network: one of them, for example, to access storage, another for administration, another internal, another the DMZ to publish services outside your environment. That's the network configuration you have in your source, and now you have to create the mappings. What you do is deploy your new OpenShift environment and configure these networks in OpenShift. Once you have configured the networks — and if you can extend the VLANs, it will be a lot easier — it's very simple: you just take one VM from source to target, and it will be connected to exactly the same network it was on in the source. If not, you can always change the addressing, but if you can extend the networking configuration, that is the easiest way. So with this we map the networks in the source to the networks in the target and make them equivalent: whenever you choose a VM with an interface connected to a network in the source, the VM we create in the target will have an interface connected to the equivalent network in the target.
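A network mapping is itself a custom resource. As an illustrative sketch (upstream Forklift API; the provider and network names are examples, not requirements), mapping a vSphere "VM Network" onto the default pod network could look like:

```yaml
# Hypothetical network mapping; names are examples.
apiVersion: forklift.konveyor.io/v1beta1
kind: NetworkMap
metadata:
  name: mapping-network
  namespace: openshift-mtv
spec:
  provider:
    source:
      name: vcenter            # the vSphere provider
      namespace: openshift-mtv
    destination:
      name: host               # the local OpenShift cluster
      namespace: openshift-mtv
  map:
    - source:
        name: VM Network       # source network in vSphere
      destination:
        type: pod              # attach migrated VMs to the default pod network
```

Each entry in `map` pairs one source network with one target network, so one mapping resource covers a whole set of VLANs and can be reused by many plans.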
If you have configured it properly, it will be exactly the same network. So this is a very simple way to avoid changing everything every time you move a VM — this is intended for mass migration. With storage, we do something very similar. You have your storage configuration, with your datastores, in your VMware environment, and your storage configuration, with your storage classes, in OpenShift. It's very important to select storage in the target similar to the storage in the source. In the source you're sometimes using NFS or iSCSI, sometimes even Fibre Channel, depending on the I/O you require. In the target you have something like Ceph, for example, or another iSCSI or NFS provider configured as a storage class, so persistent volumes can be allocated automatically. So you map A to B: this datastore is going to be mapped to this storage class in the target. Whenever you start migrating, a disk is created in the target that is, per the mapping, equivalent to the source. This way we map source and target and make it very easy to perform a mass migration.

Any questions so far? Okay, I keep going. Please interrupt me if any question comes in on the chat or if you want to ask anything.

Next step. We have the maps; now we create the migration plan. This is where we select the VMs. Of course, we have ways to filter the VMs to make it easy to select the ones we want. Many customers I've visited and worked with have their own naming structure, so filtering by VM name is very common and very easy. But you can also choose the data center or filter by cluster — it's very easy to filter the VMs and get a set you want to migrate together. So let's say you filter and get 20 or 25 VMs to be migrated. You select them, and then you assign the network mapping.
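Storage mappings mirror network mappings as a resource. A sketch, again on the upstream Forklift API with made-up datastore and storage-class names:

```yaml
# Hypothetical storage mapping; names are examples.
apiVersion: forklift.konveyor.io/v1beta1
kind: StorageMap
metadata:
  name: mapping-storage
  namespace: openshift-mtv
spec:
  provider:
    source:
      name: vcenter
      namespace: openshift-mtv
    destination:
      name: host
      namespace: openshift-mtv
  map:
    - source:
        name: NFS-datastore                        # vSphere datastore
      destination:
        storageClass: ocs-storagecluster-ceph-rbd  # target storage class
```

Every disk found on the `NFS-datastore` side becomes a persistent volume provisioned by the mapped storage class on the OpenShift side.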
Of course, it will check that the networks of the selected VMs are covered by the network mappings, and it will warn you if not. You choose the storage mapping — same thing there. You review the plan, and then you can execute it.

We also want to add, not in this version but in the next one, migration automation. Sometimes, before doing a migration, you want to deactivate monitoring for that VM, or, if the VM is part of a cluster behind a load balancer, detach the VM from the load balancer, or make changes in DNS. So you could automate the whole process before the migration, and after the migration re-engage monitoring, reattach the VM to the load balancer, or perform any other changes you'd like. This way we ensure that all the tasks you want to do around the migration can be done. It's not going to be ready for the GA, but it's coming in the next versions. And then, of course, you can monitor the migration progress and cancel it — who doesn't like a progress bar, right? So we already include a progress bar to monitor how things are going.

And a bit more about the roadmap, where we stand. As I said, we released the beta last week, with the capability to do mass migration, and we are preparing to launch the GA in May with warm migration. This is pretty interesting, because the data in the VM is copied without powering down the VM; then, when you want to perform the last step of the migration, you power down the VM, copy the delta, and power up the VM in the target, reducing the time required for the migration. Normally, for this kind of migration, an intervention window is required. First, we want that window to be as short as possible; second, we want to make the most of it, because it's usually not at regular hours.
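Tying it together, the plan and its execution are two more resources. A sketch on the upstream Forklift API — the plan name, namespace, and VM name are placeholders:

```yaml
# Hypothetical migration plan; it references the mapping resources by name.
apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: mtv-plan
  namespace: openshift-mtv
spec:
  provider:
    source:
      name: vcenter
      namespace: openshift-mtv
    destination:
      name: host
      namespace: openshift-mtv
  targetNamespace: mtv-migrate      # migrated VMs land in this project
  map:
    network:
      name: mapping-network
      namespace: openshift-mtv
    storage:
      name: mapping-storage
      namespace: openshift-mtv
  vms:
    - name: my-rhel-vm              # the batch migrated together
---
# Starting a run is a separate resource that points at the plan,
# so one plan can be executed (or retried) as distinct migrations.
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: mtv-plan-run-1
  namespace: openshift-mtv
spec:
  plan:
    name: mtv-plan
    namespace: openshift-mtv
```

Separating Plan from Migration is what makes the batch repeatable: the plan captures intent, the migration captures one execution of it.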
So, we are looking forward to helping our friends out there doing migrations, so they can make the most of their migration windows. Also, the pre-migration checks, to check the VMs before migrating and detect potential compatibility issues up front.

What else? Well, if you have any questions, comments, contributions, suggestions — anything you want to tell us — we have this email: migrate@redhat.com. Please use it. Send us your questions and your suggestions, and if you have any doubt, share it with us and let us know, because the whole team is listening, here to help you. And that would be it. I'm willing to show it to you, if I may. May I?

Of course. We do have a couple of questions if you want to take a look. Oh, great. Tell me. So let's get some questions answered before you dive into the demo — I'm really excited to see the demo, though.

All right. vCenter versions: do you support vCenter 6.x and above? Yes, we do. What we test is 6.5 and above. What we use underneath is the VDDK from VMware, so we behave like any other backup software: we connect using the VMware-certified mechanism for backups, which is the VDDK, and we use the VDDK to extract the data. The current VDDK is only supported for 6.5; however, we know it is backwards-compatible, and you could use it to access earlier versions of VMware. So you could run it, but we state what we test, and if you want to use it for something else, of course you can do it — we're just letting people know what we test.

And if they have any issues, they contact you at migrate@redhat.com, or also the Konveyor.io community, would that be right? Yes.
These are the places to contact us. You can go to the Konveyor.io community — I was trying to open it, but it's not working for me right now; it seems my DNS is misbehaving. So, yes, you can go to Konveyor and join the Slack channels we have under the Kubernetes workspace — go to slack.k8s.io, and in that Slack workspace there are channels, like the one for the Migration Toolkit for Virtualization, that you can join to tell us how it's going and propose your suggestions. Any of these channels is a good way to contact us. During your demo, I'll pull up the link to that Slack.

Also: are you able to share storage between the target VMs running on KubeVirt? So, this is more of an OpenShift Virtualization question, and I'm not completely up to date on the status of shared storage in OpenShift Virtualization, so I don't want to say something that is wrong. But you can check the official OpenShift Virtualization documentation, and it will be stated there. Or we can ask in that Slack channel, right? Yeah.

Let's see. I'm assuming this only works for supported VM infrastructures. Are there any limits on where the VMs can come from? Can I import from multiple types of infrastructure, for example RHV or Azure, at the same time? So, we built a provider for VMware to be able to import from VMware, and we are working on building another provider for Red Hat Virtualization, and by the end of the year we want to work on adding another provider. But of course, if somebody wants to build their own provider for Azure, Amazon, or whatever, and share it with the community, we'll be very, very happy to lend a hand with that provider, and once it is ready, include it in the downstream version of the Migration Toolkit for Virtualization. Nice. Thank you.
Two more questions, then we'll get to your demo, and even more questions after that. All right. What about VMware tools after migration? And with all the recommendations, do you recommend cleanup of VMs prior to migration — cleaning up temp files, downloads, old programs, et cetera?

In this case, we are standing on the shoulders of giants. Well, I don't know if giants, but we are standing on a lot of proven technology. There's a tool that ships with Red Hat Enterprise Linux called virt-v2v, virtual-to-virtual. This tool was created to extract VMs from VMware and put them onto QEMU/KVM-supported environments — for example, Red Hat Enterprise Linux. It can be used to import into RHV, it can be used to import into OpenStack, and we are leveraging it to import into OpenShift Virtualization. One of the things virt-v2v does is stream the disk, and while streaming the disk it removes all the VMware drivers and tools and adds the drivers needed for the target, like the VirtIO drivers. So when the VM arrives at the target, it will boot, and it will boot correctly, because it has the right drivers. Nice, thank you. I'm going to go test out that tool myself later. That's the command-line version; if you want the easy-to-use version, you can go for MTV, because it uses that underneath.

Are there any benefits of using MTV over the VM import wizard available today, when wanting to import just a single VM? Yeah. One of the things we are planning to do — and I'm pretty sure it's going to be on schedule — is for the Migration Toolkit for Virtualization to supersede the import tool. The import tool is just for you to test importing one VM, and we are leveraging its code for the Migration Toolkit for Virtualization. The benefit is that you can plan this: you can plan it with a list of VMs.
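For the curious, a standalone virt-v2v run looks roughly like this. The vCenter URL, datacenter path, password file, and VM name below are all made-up placeholders; only the flags themselves come from the virt-v2v tool:

```shell
# Illustrative virt-v2v invocation (hypothetical names throughout):
# pulls a guest directly from vSphere, strips the VMware tools,
# injects VirtIO drivers, and writes a local qcow2 image.
virt-v2v \
  -ic 'vpx://administrator%40vsphere.local@vcenter.example.com/Datacenter/esxi01?no_verify=1' \
  -ip /tmp/vcenter-password.txt \
  -o local -os /var/tmp -of qcow2 \
  my-rhel-vm
```

Here `-ic` is the libvirt connection URI for vCenter, `-ip` points at a file holding the password, `-o local`/`-os` select a local output directory, and `-of` sets the output disk format. MTV drives this same conversion for you as part of each migration.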
When it goes GA, you will be able to check, before you migrate, that the VM doesn't have anything that would render it unbootable or unable to be migrated. And the third benefit we are working on delivering for the GA in May is that you will be able to do a pre-copy before the migration, so when you do the migration you only have to copy the delta, reducing the time needed. So these are the benefits MTV brings versus the tool that comes with OpenShift to import a single VM. Awesome, thank you. Let's see your demo, and then we'll get back to some more questions.

Cool. So, OpenShift Virtualization: you have your OpenShift instance, and OpenShift Virtualization is supported on bare-metal nodes, so you will need some bare-metal nodes to have it supported — although you can enable nested virtualization, like I do here. Things are going to go a bit slowly because we're using nested virtualization, but I expect this to work properly. So this is our lab environment. This is OpenShift 4.7, as you can see here — the supported version. I can go to the installed Operators, choose all projects, and I will see that I have the OpenShift Virtualization operator installed and configured. You have OpenShift Virtualization version 2.6.0, which is the one we test on, so if you want to run on a tested environment, you should run the Migration Toolkit for Virtualization on top of OpenShift Virtualization 2.6.0.

So, what do you need to do? You install the Migration Toolkit for Virtualization operator, and then you can use it. How do you use it? Well, when you install it, a project gets created, openshift-mtv, and if you go to Networking for that project and check Routes, there's a published route, which is the interface to the Migration Toolkit for Virtualization — I have it open here, so I let it load. This is the interface, and it's
pretty straightforward. Once I complete the migration, I could do a quick demo of how to install it. So I can go here and get started — this is freshly deployed. First thing, I need to log in, and you have to log in as a cluster administrator, so give me a second while I gather my credentials. Okay, I'm logging in, and I'm going to share my screen again — sharing my screen in 3, 2, 1.

So, I logged in here. I'm in Spain, the lab is in Boston, so expect some delays while running this demo, but I've run it a couple of times and it worked well. I can get started. I see the providers: this is the provider where the operator was installed and instantiated, it has found 7 storage classes, and it's completely ready. So I can add a provider now. I select VMware and just give it a name, vcenter; then provide a hostname, our vCenter hostname; a username, administrator@vsphere.local; then the password; then the fingerprint, just to ensure we're connected to the right VMware provider and not to something else. Once we do that, I can go to Providers, VMware: it's going to check that everything is okay and gather the data. You see 2 clusters here, 2 hosts, 56 VMs,
13 networks, 4 datastores — and now it's ready. So we have the provider ready — the source provider, sorry — and we have the target provider, both of them ready. Now we can create the mappings. I go and create a network mapping: I create the mapping and name it mapping-network, because I'm very original. I choose the provider, the source and the target, and now I have to choose the network equivalence. On the source side I choose the VM Network, and on the target I select the pod network, and this is going to be my mapping: the VMs are attached to the VM Network, and they will be reattached to the pod network afterwards. Okay, so I just create the mapping, and here it is, available for as many migration plans as I want to use it in.

Then I go to storage and create a mapping, same thing: I create the storage mapping, select the provider, vcenter, and select the target, host. I know that my VM is running on the NFS datastore — I could map the other ones too — and I want to use the storage class ocs-storagecluster-ceph-rbd, because I'm using OpenShift Container Storage here: it's properly distributed, it's software-defined, and it works really well. So I create this map and have the two maps ready.

Now, let's migrate. I go to migration plans, create a migration plan, and give it a name — I'm going to call it mtv-plan, with description MTV. I select the source provider, vcenter, and the target provider, host. And I love this part: you can select a namespace from all the ones that already exist, or I can type a new one, mtv-migrate, and if I click here it will create that namespace for me. Okay, good, next. Then I'm going to filter the VMs: I choose the cluster my VMs are running on, click next, and it retrieves the whole list of VMs — there are a lot of people working here, so I'm going to filter the VMs by name, and
there we have it: the RHEL VM that I'm going to migrate. I chose a small VM to make this migration quick, so we can see it happen. I select this VM — I could select 20 VMs if I wanted, no problem with that, or 30, or 100. Then I choose the network mapping, select the one I made, next; I choose the storage mapping — you see I could create a new storage mapping here in case I was missing one — next; and then I review the result. I'm going to migrate only one VM — I could migrate 100, but I'm going to migrate only one — and these are the mappings, and this is the plan. I click finish, and everything is ready to be migrated. So I can click start, and the migration will begin. One of the things we are planning to add is the ability to schedule this process, so you can say, okay, run it at 3 o'clock in the morning. Now that I've run it about 20 times, I'm completely sure it runs well, so let's get it running.

This is the progress bar for the number of VMs migrated — in this case it's only one, so it's going to go straight to green — but we can check here, in the details, that right now it's in the disk transfer phase, and it's copying the first gigabyte of data out of 9. So it's going to copy and stream the data, and then it will convert the image for KubeVirt, doing all those driver transformations I mentioned and cleaning up the VMware tools, so when it completes, it will be fully done. This is now running. I can go to OpenShift, go to the Projects overview, and select the project I just typed, mtv-migrate. This is the project that was created for me — it wasn't here before. I can click on Details, Workloads, and a VM will be created here in the workloads, and I can check on it. Let's give it a couple of seconds and see how it's going: the disk transfer — it seems the VM instance is already created, and now the disk is going to
be attached, and the network is going to be attached, and this VM is going to complete the migration. So this is the demo so far. We have to wait for it to complete; it normally takes around 8 minutes, so if you want, you could shoot more questions. Keep going. I know I have a lot of questions, but I wanted to wait until the end. The demo is that simple? I guess it was like, super simple. Our friends in user experience and design are working with us and are making things super easy, super easy to understand, very well located, and then the engineering team is focusing on making this as robust as possible, so we end up with these tools, as you see, very simple and very reliable. Amazing demo. I just wanted to keep going. So one thing that I was wondering is, at the beginning you mentioned analyzing your applications using the toolkit, the migration toolkit for applications, and then using the MTV, so how do those line up together? How does the planning work? That's a great question. I don't know who mentioned it, but I love it. Look, we have here, it's called Forklift; let's say that this is MTV. And here we have Windup; this is currently MTA. So these are the two tools that we have available. In case you want to replatform from virtual machines to Kubernetes, you could use MTV. And let's say, okay, I'm moving 20 VMs with JBoss Enterprise Application Platform to containers, and I want to turn those applications into something, apply the strangler pattern and be able to turn them into microservices; or I have them running in WebSphere or Tomcat and I want to put them in containers. There is a set of paths that MTA, the migration toolkit for applications, covers, and what MTA is going to do is analyze the application at the application level. But what you can do is take the VM as it is in VMware and bring it to OpenShift, and then you already have all the developers working in OpenShift, with an environment built for developers, that developers enjoy and understand. You can manage it in a way in which
you could run it for cloud-native applications, but with virtual machines, to make the transition even smoother. So once the VMs with your WebLogic, let's say, are running on your OpenShift Virtualization, you will be able to take those applications and analyze them. And the thing is that what MTA does is pretty simple, you know, and I can run it for you, I have it here. You just have to give it an application and it will analyze it, and it will tell you what you need to change in the application to do the migration. So these are the two ways you could improve or modernize the current status of your application portfolio with OpenShift and with the tools that we provide, with MTA and with MTV. There are two different paths: one of them is, as I say, focused on applications, on improving the application and bringing it into a container environment, into a cloud-native world, as fast as possible, which is normally not a fast and easy thing to do; and the other one is just lifting and shifting the VMs to have all the applications in the environment that your developers want to use. So for that, I mean, you could go to red.ht/MTA and download it. This is version 5.1 so far. You just download the zip file and unzip it, run the script as I just did, and then you will be able to run the migration toolkit for applications and analyze your applications. And then this is what you will see, and you get two demos at the price of one. So this is an application that I analyzed that is completely ready to run on JBoss EAP, whereas this other one has some WebLogic proprietary artifacts that need to be changed. I could dig down into this application, check the issues and be able to check it: look, I'm using the WebLogic proprietary logger and I need to change it. So these are the two paths that you could follow, and normally you follow first migrating the VMs, then migrating the applications, although sometimes it makes more sense to do the application migration directly, depending on the status. But
for that, I mean, you could count on the consulting team; they're always there to be able to do a discovery session and help the customers decide on what they want to do, what is the best path and how to get there. Did I reply to the question? Yes, that was great. And then, we don't always want to say, hey, you have to use consulting, right? No, I mean, there are a lot of partners helping our customers do these migrations, and these partners are, as I say, very skilled, because they have been doing it for a long time. And I mean, it's a matter of the customer choosing how they want to do this modernization and how to get on this journey to the open hybrid cloud, to provide a better service for their own clients and to be able, sometimes, to expand their addressable market and be able to grow. So yeah, if we could help our customers in that way, we'll be willing to do it. Thank you. I wanted to point out a great comment in the chat from a little bit earlier: a staggered schedule would be nice too if I'm migrating 100 VMs. Yeah, yeah, my colleague Fabien, who is the engineering manager, is going to love hearing this, because he's already thinking about how to establish some throttling and how to make this scale to be able to move a large number of VMs; we're thinking like hundreds. So yes, it's something that is on our minds, but right now we have just released the beta, and we keep adding features, and that is one of the things that we keep in mind for future versions of the Migration Toolkit for Virtualization. Thanks for the comment, by the way. So, and then, are there any issues if the VMware environment is using vVols? vVols? Yeah, the vVols. So far, from what we have tested, they behave. There are some corner cases in which we find some issues when obtaining the data from the vVols, but it's very unlikely to happen, because, again, the way we are extracting the data from the VM is the same way a backup solution would do it, and
VMware wants their backup vendors to work well with their vVols. So normally, if there's an issue extracting data in a very weird corner case, it's because there's something wrong with the implementation of the vVols, because we are using the toolkit that VMware provides to extract the data from VMware. So we should be safe. So, I know, I mean, myself, I feel that panic if I hear "it should be safe", right? Especially when you're talking about data. Yeah, yeah, no, no, you're right. I mean, that's why we follow the safest path. But for this I behave like an engineer, you know; if I haven't tested it like one million times, I'm not going to say it's safe. But, I mean, it's as reliable as any other backup tool. So I was going to ask, if there is an issue, do you see the error right away? How are you notified of that corner case? I mean, if during the migration we get an error, we'll see it here, directly in the interface. We are putting in all that we can to make the error messages as clear and explanatory as possible; that is key to be able to perform a migration, because one single VM that is not migrated will be a problem. And then, what we're doing right now is releasing the beta as early as possible, so people could start trying it and giving us feedback on these corner cases. And the second part is that, again, we're working on having the logs and the error messages as clear as possible, and on being able to gather all the logs together. So this is the direction we are heading. Of course, as I said, we are choosing the safest path, so right now, what this beta does is power down the VM, and then it starts copying the data with the VM powered down, so the VM is in a consistent state, and then it powers on the VM in the target, but it doesn't remove the VM in the source. So if there is any corner case that we forgot about, that we couldn't find, or that we are not aware of, you know, you can always shut down the
VM that you have migrated and power on the VM that is on the source; those ones are kept. And normally, when we've done migrations with customers, in the initial migrations during the pilot phase, before we scale, we keep these VMs and we run a batch of tests against the migrated VMs to ensure that they are running perfectly, and then, when those tests are completed and verified, that is when we delete the source VMs. So normally there's a period of time in which you keep the source VM as a way to be able to roll back, just in case something didn't work as expected. But so far, our experience is that whenever the migration gets completed, then, unless there's a misconfiguration of networking, the VM works as in the source, and it works properly. So no concern about that. Dan had a follow-up question, but I think you already answered it, but I'll say it anyway: so, what if the source VM has snapshots? Will that foobar the migration? So if you're already bringing down the whole VM, are you worried about the snapshots?
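As a side note for readers: the mappings and the plan created through the demo UI correspond to custom resources under the hood. Below is a minimal, hedged sketch assuming the upstream Forklift API (`forklift.konveyor.io/v1beta1`); the resource names follow the demo, the VM name is hypothetical, and exact field names may differ in the beta release.

```yaml
# Sketch of the Plan resource behind the demo's migration plan.
# Assumption: upstream Forklift API group and fields; verify against your MTV version.
apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: mtv-plan
  namespace: openshift-mtv
spec:
  warm: false                   # cold migration: the source VM is powered down first
  targetNamespace: mtv-migrate  # the namespace created from the UI
  provider:
    source:
      name: vcenter             # the VMware source provider
    destination:
      name: host                # the local OpenShift cluster
  map:
    network:
      name: mapping-network     # VM network -> pod network
    storage:
      name: mapping-storage     # NFS datastore -> Ceph RBD storage class
  vms:
    - name: rhel8-vm            # hypothetical name for the demo VM
```

Starting the plan then creates a `Migration` resource that references it; because the source VM is powered down but never deleted, the rollback described above stays available.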
This is... this is something, you got me. I'm not completely sure. The previous behavior was that we collapsed the snapshots, so we didn't have any issue with the snapshots; VMware is telling all the customers, look, don't use the snapshots for backup, so if you have snapshots, they should be able to be collapsed. We are working on not having to collapse the snapshots, but right now I don't know the status of this, so I may need to... Miguel, just a quick note on that: we are not deleting snapshots. Actually, it was in IMS 1.1 and IMS 1.2 that we validated that it works without removing snapshots at all. We kept on that line, and the snapshots are not a problem from the VMware point of view if the VM is down. We're actually going to use the current state of the VM as a base to do the transfer, so whatever snapshots you have, you keep them; we're not removing them, and the current state is the image that we are moving. So even if you want to roll back, you keep your previous snapshots, so if you use them as a point in time for other rollbacks, you can continue to roll back even further into the past. For warm migration we are creating snapshots for changed block tracking, but normally they shouldn't affect normal snapshots. It's something that we need to verify with real virtual machines that we kind of try to break, and see what happens, but that's something in our test plan for warm migration. Ladies and gentlemen, let me introduce you to Fabien Dupont, our engineering manager for the Migration Toolkit for Virtualization. Thanks a lot, Fabien, for coming to the rescue. And if you want to stop sharing your screen, Miguel... Yeah, sure, I mean, it's going to take a few more minutes, so yes, I'm going to stop sharing the screen. I'm really excited that you jumped on too, Fabien. Thanks. I was listening; so far you did very well. Thanks. We have a couple more questions. I saw that the distributed switch port groups during migrations, those networks are not an issue, correct? Distributed switch port groups... Dan, do you want to ask
live, or... They are not an issue; they are considered like a port group or a traditional network. The main difference, in our opinion, between a normal network and a distributed switch or distributed port group is that with a distributed port group you don't have to configure it on every ESXi host, so it might be that some of the networks in the source don't exist on all ESXi hosts when you have a traditional or legacy network, but if you're using distributed switches, it's going to be automatically configured for you. From our point of view, it's just another network, on which you can have VLAN tagging, MTU parameters, or whatever network configuration VMware has; we already take care of them. Nice, that answered his question. Also, this is a great question too, I was wondering this as well. So Mike asks, is the source alive after the migration? Well, it's still alive but sleeping. So we shut the VM down, but the VM is not removed. As Miguel explained, we keep it as a backup plan; if anything is wrong on the destination, you can still roll back. Your VM is there; it's an easy rollback, and really fast. That's why we keep them. So, have you run into issues as you're building out the tool with network contention, or has anything accidentally stayed up, and now you're worried about two things being live? No issues, no major issues. One thing we've noticed is that we've seen, sometimes, Windows machines not really appreciating being migrated, but usually trying the same VM from a different vCenter worked; we considered that more of an environment issue in the test labs we have, rather than a VMware conversion issue. From a network contention point of view, the faster the network is, the better, mainly to reduce the downtime. We are doing our tests in PSI, and it's not a super fast environment, because we are sharing the network with many other projects, so sometimes it's quite slow, and, well, the migration goes to the end, it works; it's just that you have to be patient. So yeah, we really advise to have a network
benchmark before you start doing mass migration, to have a clear assessment of what the platform can support. You also don't want to crash the network and have an impact on running VMs or on backups, because they are likely to use the same storage backend too; if there are two processes reading the same disk at the same time, that's probably going to slow down the backup, which is not a good idea if you need to run backups. So, general recommendations, similar to doing backups: don't do it over your production network, right? Just a reminder. Alright, so we have a couple of minutes, three minutes left, and there are questions around OpenStack and RHV: can the source be RHV or OpenStack, and is there a supported migration path? It will be; we are working on it. Let's say that for the third quarter of this year, if everything goes beautiful and wonderful, we will be able to have RHV as a source, and then by the end of the year we want to have OpenStack. So it's in the plan; we are already considering it. Other providers, like Hyper-V or Nutanix, are not in the plan right now; however, if somebody wants to contribute that, we are more than willing to listen to them and to help them ramp up to be able to build it. So that's as far as the roadmap goes. Or if you have a customer with some budget for engineering... Yeah. Also, you may go to konveyor.io, and that's Konveyor, k-o-n-v-e-y-o-r, dot io. I hope I spelled that right. Now I am going to show you the website for a second, if I may. Perfect, yes, please. We just updated it recently, so you can have all the information here, all the projects we host: rehost containers, rehost virtual machines, replatform, measure software delivery performance, refactor applications to Kubernetes; and the meetups that we do, also with content on real-world examples, in many cases in which, well, we invite people to the community that are specialists in the field, working with customers, that could provide hints and feedback on how to perform a good migration, and what to avoid, and what
are the best practices. Yeah, all of the meetups that I've been to so far have been great; I mean, like you said, real-world examples, so I definitely recommend going to those virtual meetups. Right now, with MTV, we are in beta, so we are going to start rolling, and whenever we have these runs with customers for large migrations, I mean, I'm going to invite whoever is involved to share the experience. Well, thank you, and we're at time. Thank you again; that was a really great presentation, great Q&A, and we'll definitely have you back for a follow-on. Happy to be here, and thanks to you for inviting us, Fabien for saving me, and Chris for taking care of the backend of this meeting. Yes, thank you, Chris. And if you want to see us out, for everybody else that has joined us on Blue Jeans, Chris's