Welcome everyone to another OpenShift Commons briefing. Today we're very excited to have Miguel here. He has been working with the engineering teams on these migration toolkits, and we're so excited for this Migration Toolkit for Virtualization. It's definitely been long awaited. Miguel, do you want to introduce yourself and talk to us about migration? Sure. My name is Miguel Perez-Colino. I'm the product manager for the Migration Toolkit for Virtualization, in the modernization and migration team at Red Hat. I'm taking care of this tool, which reached the beta stage just last week, so I thought, OK, I have to go to OpenShift Commons to show it. So, first things first: OpenShift, containers, and VMs. The Migration Toolkit for Virtualization is intended to move virtual machines, initially from VMware, to OpenShift. In the future we'll add more sources, but we'll keep the same target, and the target is OpenShift Virtualization. So what is this about? There's a saying that containers are not virtual machines, and another saying that containers are Linux, and Linux is containers. A container is a way to isolate a process. It's like a super-isolated process, using kernel namespaces, cgroups, and SELinux in Red Hat Enterprise Linux to isolate those processes very, very well. Virtual machines, on the other hand, require a guest OS and a hypervisor. But the thing is that in Linux, a virtual machine is also a process, and we can encapsulate that process. The Linux kernel includes KVM, the Kernel-based Virtual Machine, which is an engine for running virtual machines, and it's used in almost every public cloud for virtualization. And it's very performant. Try to think what a 1% increase in performance would mean to a public cloud.
So try to understand that this is a really good engine for virtual machines, and it's included in Linux. It's included in Red Hat Enterprise Linux and in CoreOS, which use the same kernel and the same KVM. So we could leverage that, and all the experience we have in Red Hat Virtualization and OpenStack, to build virtual machine capability on OpenShift. This is how we get to OpenShift Virtualization. What is OpenShift Virtualization? We have adapted Kubernetes and OpenShift to run virtual machines. We created the KubeVirt project three years ago, if I recall correctly, and KubeVirt matured and became pretty solid. Then one year ago, in April 2020, we released OpenShift Virtualization, and now it's available to run virtual machines in OpenShift next to your containers. With all the benefits that come from running on Kubernetes: the declarative way of deploying infrastructure, the operational benefits of Prometheus for gathering metrics, all those things you know and love from OpenShift, plus the interfaces to network and storage that OpenShift has been developing for so long. So that's good, but what is this for? It's all about modernization and migration. Customers, users, and developers worldwide all want to become more modern, because it brings a lot of benefits. When development teams start using containers, they become more agile and faster, they have a lower time to market, they release more frequently, and in case there's an incident, their time to restore is reduced. These are the metrics we all know get improved when you start working in a cloud-native way. So what if we could move the workloads that make sense to have next to containers into OpenShift, have them next to the containers so they behave more like containers, and then modernize them over time?
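That "declarative way of deploying infrastructure" applies to VMs too: with OpenShift Virtualization, a virtual machine is just another Kubernetes object you describe and apply. As a rough sketch, here is what a minimal KubeVirt VirtualMachine manifest looks like, built as a Python dict; the field names follow the kubevirt.io/v1 API, but the VM name, image, and sizing are hypothetical placeholders, not anything from the talk.

```python
# Illustrative sketch of a KubeVirt VirtualMachine manifest as a Python
# dict. Field names follow the kubevirt.io/v1 API; the name, image, and
# memory values are hypothetical placeholders.
import json

def minimal_vm(name: str, image: str, memory: str = "2Gi") -> dict:
    """Return a minimal VirtualMachine object as a plain dict."""
    return {
        "apiVersion": "kubevirt.io/v1",
        "kind": "VirtualMachine",
        "metadata": {"name": name},
        "spec": {
            "running": True,  # start the VM as soon as it is created
            "template": {
                "spec": {
                    "domain": {
                        "devices": {
                            "disks": [{"name": "rootdisk",
                                       "disk": {"bus": "virtio"}}]
                        },
                        "resources": {"requests": {"memory": memory}},
                    },
                    # containerDisk boots the VM from a container image
                    "volumes": [{"name": "rootdisk",
                                 "containerDisk": {"image": image}}],
                }
            },
        },
    }

vm = minimal_vm("demo-vm", "quay.io/example/rhel-guest:latest")
print(json.dumps(vm, indent=2))
```

You would apply the resulting YAML/JSON with `oc apply` like any other Kubernetes object, which is exactly what makes VMs fit the same operational model as containers.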
So it's a lower-friction way to bring those VMs into a container-like world, have one converged infrastructure for those critical workloads, and then do the whole modernization process step by step over time. So what are we thinking? What if we could automatically convert VMware images to KVM images on OpenShift? That would lower the cost of migrating the workloads, and that's a direct benefit of this. So what would this look like? Let's say we have WebLogic servers with Apache frontends and a database running on VMware. We could move them as virtual machines and then modernize, for example, the Apaches. They are pretty easy, so we put those Apaches in containers, or even swap them for Nginx, and drag all those frontends into containers. Then we could, for example, take some applications from WebLogic to JBoss, or keep them on WebLogic in containers. That's something you can also do on OpenShift: running WebLogic in containers, same as WebSphere or JBoss. But in this case, let's say we're going to move this application to JBoss. Let's modernize the application, make it leaner and more standards-oriented so it can run on JBoss. We keep modernizing, and we could end up modernizing all the applications and even moving the database into containers at some point. This makes for a very easy, iterative approach to modernization. It's a really good way to first shift the workloads into virtual machines and then do the modernization at your own pace. How do we approach this? Well, you have your workloads, some strategic and some not strategic, which you can assess with Pathfinder and analyze with the Migration Toolkit for Applications. And what can you do? You can replatform them as VMs.
So you just move them: whatever is a VM moves as a VM, whatever is a container moves as a container. You can refactor them and repackage them as containers, so you modernize the application. You can repurchase if it's a third-party application. And then you put them in an OpenShift project: you test them in OpenShift, and if they're not good, you fix them and test again. Once they are good, you deploy to production in OpenShift, and then you're modernizing. What if we want to enhance them further? We can go through the refactor loop again and improve, improve, improve. This is, for example, a pattern we have seen with large monoliths: you split them and modernize piece by piece until the modernization is complete. What do you do with the non-strategic workloads? Retire, rehost, or retain. Some of them you retire, some you rehost, some you retain. Retire has a second angle: for example, if you're running your own email service, you probably want to move to software as a service. So there are several options for non-strategic workloads that could be provided as a service. We are working mostly on replatform, refactor, and repurchase, focused on replatform and refactor. The Migration Toolkit for Virtualization is fully replatform: it moves VMs from VMware to OpenShift Virtualization. What are the benefits and the operating costs? Well, there are more operating costs in rehost and retain, because you have to keep your current infrastructure. If you retire, it's pretty easy, but you lose some services. When you replatform there's more business benefit, and when you refactor there are even more business benefits to obtain. So what do we do in my team? Well, we have tools for these cases.
For rehost, we have a tool called MTC, the Migration Toolkit for Containers, whose upstream project is called Crane. For replatform, we can move containers using Move2Kube, which brings containers running on Cloud Foundry to OpenShift, and Forklift, which is the upstream for the Migration Toolkit for Virtualization, to move VMs into OpenShift Virtualization as VMs. And for refactor, you have the Pathfinder and Windup projects, which result in the Migration Toolkit for Applications. These are the migration toolkits. With them you can assess and analyze applications: first, with the assessment, choose which applications to work on first, and then, with the analysis tool, start transforming them to put them in containers. Any questions so far? Is there anything in chat? Not yet, this is awesome. Keep going. Okay, I'll keep going. As I said, we have these projects upstream, in the Konveyor.io project. I really suggest you visit it. If you go to github.com/konveyor, you will find all the projects we put there. Some of them are still not migrated; this is pretty fresh, pretty new. So do visit: there are mailing lists, there are forums, and we even have meetups to show the internals and all the technical details of the projects, and to help everybody join and contribute. So you see these projects: Crane, Forklift, and Tackle. For Crane, we have the downstream tool that we provide, the Migration Toolkit for Containers, to migrate from OpenShift 3 to 4 and also from 4 to 4. Say you have a cluster that is getting full and you want to move some applications out of it, and your pipelines are not that easy to repurpose: you can use MTC to move those containers and their persistent volumes easily from one cluster to another.
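To make the MTC workflow concrete: each move between clusters is described declaratively by a migration plan resource naming a source cluster, a destination cluster, and the namespaces to copy along with their persistent volumes. The sketch below is illustrative only; the field names are based on MTC's `migration.openshift.io/v1alpha1` API, and every cluster and namespace name here is a hypothetical placeholder.

```python
# Illustrative MigPlan for MTC (upstream Crane). Field names are based on
# the migration.openshift.io/v1alpha1 API; all names are hypothetical.
migplan = {
    "apiVersion": "migration.openshift.io/v1alpha1",
    "kind": "MigPlan",
    "metadata": {"name": "move-my-app",
                 "namespace": "openshift-migration"},
    "spec": {
        # source: an OpenShift 3 (or 4) cluster registered with MTC
        "srcMigClusterRef": {"name": "ocp3-cluster",
                             "namespace": "openshift-migration"},
        # destination: the cluster where MTC itself runs
        "destMigClusterRef": {"name": "host",
                              "namespace": "openshift-migration"},
        # namespaces to copy, including their persistent volumes
        "namespaces": ["my-app"],
    },
}
print(migplan["metadata"]["name"])
```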
Then you have Tackle, which becomes the Migration Toolkit for Applications. As I said, it assesses and analyzes applications. You can analyze your applications with this tool, and it will tell you: you have these things in the application, like a proprietary logger, or Windows-specific paths, or a proprietary class from the Oracle JDK when you want to move to OpenJDK. You can use MTA, the Migration Toolkit for Applications, for that. And what I'm going to talk about is the Migration Toolkit for Virtualization. So what does it do? It's built to migrate virtual machines at scale to OpenShift Virtualization. We have built tools before to migrate at scale, and we have used those tools, so thousands of VMs have been migrated. Now we are building this tool with all the lessons learned from the previous tools we've built, but with OpenShift Virtualization as the target. So you can mass-migrate virtual machines to OpenShift Virtualization. Where are we now? Well, the beta is out, so if you are on OpenShift it's very easy to install, and I will demo it in a few minutes. About the architecture: everything is OpenShift, everything is container-native, everything runs in containers. You see that we have a source, which in this case is VMware vSphere, and during this year we'll be adding Red Hat Virtualization and OpenStack as sources, in case you want to move VMs from those to OpenShift Virtualization. We have an inventory service that gathers all the information from VMware vSphere, and we have a validation service that checks how each VM is configured and runs checks on it.
If something is not right, it will raise it and say: hey, I found these things that could be an inconvenience when migrating this VM to OpenShift Virtualization. So you do not start a migration that could fail. Maybe you need to check things like raw device mappings that are attached to the VM and that you want to keep as raw device mappings, or two VMs sharing a disk, where you don't want to end up with two VMs with two separate disks but with one single disk attached to both. These kinds of things are checked beforehand, so the migration runs as smoothly as possible. We are adding more and more rules to the validation service to ensure that when you migrate a VM, it's as successful as possible. Then we have the user interface, of course, built with PatternFly 4. I love the PatternFly project; it makes our interfaces look so nice. We try to make the interface as simple and pleasant to use as possible, even though it's powerful. What do we have in there? We have mappings, to map resources from source to target. We have migration plans, to say which VMs are going to be migrated in the same batch. And then we have the migration run, to execute the migration. Then, of course, there's the controller, and the capability to import VMs into OpenShift Virtualization, which we leverage to move them, handled by the import operator. So this is the architecture. If you want another session with more technical details, we can invite my friend Fabien Dupont and talk about the internals. What else? Okay, providers. First thing, we need to connect source and target. So we have a provider for the source, right now VMware vSphere, and we have the target, which is OpenShift Virtualization.
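The shared-disk rule just described is a good example of the kind of pre-flight check the validation service performs. MTV implements its rules server-side against the inventory; purely as an illustration of the idea (the data shape and function below are hypothetical, not MTV's actual API), a check like it could be sketched as:

```python
# Sketch of a pre-flight check like the one described: flag any disk
# attached to more than one VM. The inventory data shape here is a
# hypothetical stand-in, not MTV's actual inventory API.
from collections import defaultdict

def shared_disk_concerns(vms: list) -> list:
    """Return warnings for disks attached to more than one VM."""
    owners = defaultdict(list)
    for vm in vms:
        for disk in vm["disks"]:
            owners[disk].append(vm["name"])
    return [f"disk {d} is shared by {', '.join(names)}"
            for d, names in owners.items() if len(names) > 1]

inventory = [
    {"name": "db-1", "disks": ["ds1/db-1.vmdk", "ds1/quorum.vmdk"]},
    {"name": "db-2", "disks": ["ds1/db-2.vmdk", "ds1/quorum.vmdk"]},
    {"name": "web-1", "disks": ["ds1/web-1.vmdk"]},
]
for warning in shared_disk_concerns(inventory):
    print(warning)  # only the shared quorum disk is reported
```

Surfacing this before the plan runs is exactly what keeps a migration from starting and then failing halfway through.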
You have to connect the tool to the provider, which would be VMware vSphere with provided credentials, and also to the target. When you deploy the Migration Toolkit for Virtualization, the OpenShift instance on which you deploy it gets configured automatically as a target. So it's very easy: if you want to do a simple migration, it's going to be very straightforward. So we have the source and the destination; this is from where to where. Now, how? How do we map what is already there? Normally, what you have in the source is a set of configured networks. They are usually attached to VLANs, depending on how you configure things, but it's pretty common to have a certain set of VLANs attached to your virtualization networks: one of them, for example, to access storage, another for administration, another internal, and another the DMZ, to publish services outside of your environment. This is the network configuration you have in your source, and you have to create the mappings. So what you do is deploy your new OpenShift environment and configure these networks in OpenShift. Once you have configured the networks, and if you can extend the VLANs, it becomes a lot easier: you just take one VM from source to target and it will be connected to exactly the same network it was on in the source. If not, of course, you can always change the addressing, but if you can extend the networking configuration, that's the easiest way to do it. With this, we map the networks in the source to the networks in the target and make them equivalent. So whenever you choose a VM that has an interface connected to a network in the source, the VM we create in the target will have an interface connected to the equivalent network in the target.
If you have configured it properly, it will be exactly the same network. This is a very simple way to avoid changing everything every time you move a VM; this is intended for mass migration. With storage, we do something very similar. You have your storage configuration, with your datastores, in your VMware environment, and you have your storage configuration, with your storage classes, in OpenShift. It's very important to select storage in the target similar to the storage in the source. In the source you might be using NFS, iSCSI, or even Fibre Channel, depending on the IO you require. In the target you have something like Ceph, for example, or another iSCSI provider, or an NFS provider, configured as a storage class so you can allocate persistent volumes automatically. So you map A to B: this datastore is going to be mapped to this storage class in the target. Whenever you start migrating, a disk is created in the target which, thanks to the mapping, is equivalent to the source. This way we map source and target, and we make it very easy to perform a mass migration. Any questions so far? Okay, I'll keep going; please interrupt me if any question comes in on the chat or if you want to ask anything. Next step: we have the maps, and now we create the migration plan. This is where we select the VMs. Of course, we have ways to filter the VMs to make it easy to select the ones we want. Many customers that I've visited and worked with have their own naming structure, so filtering by VM name is very common and very easy, but you can also choose the data center or filter by cluster to get the set of VMs you want to migrate together. So let's say you filter, you get 20 or 25 VMs to migrate, you select them, and then you assign the network mapping.
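The two kinds of mappings just described are themselves Kubernetes resources in the upstream Forklift project. As a rough illustration of their shape (field names based on the `forklift.konveyor.io/v1beta1` API; every network, datastore, and storage-class name below is a hypothetical placeholder):

```python
# Illustrative Forklift mapping resources. Field names are based on the
# upstream forklift.konveyor.io/v1beta1 API; all names are hypothetical.
network_map = {
    "apiVersion": "forklift.konveyor.io/v1beta1",
    "kind": "NetworkMap",
    "metadata": {"name": "mapping-network"},
    "spec": {
        "map": [
            # a vSphere network lands on the OpenShift pod network
            {"source": {"name": "VM Network"},
             "destination": {"type": "pod"}},
        ],
    },
}
storage_map = {
    "apiVersion": "forklift.konveyor.io/v1beta1",
    "kind": "StorageMap",
    "metadata": {"name": "mapping-storage"},
    "spec": {
        "map": [
            # an NFS datastore maps to a Ceph RBD storage class
            {"source": {"name": "nfs-datastore"},
             "destination": {"storageClass": "ceph-rbd"}},
        ],
    },
}
print(network_map["kind"], storage_map["kind"])
```

Because the maps are standalone resources, the same pair can be reused by as many migration plans as you want, which is what makes the mass-migration workflow repeatable.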
Of course, it will check that the networks of the selected VMs are in the network mappings, and it will warn you if not. You choose the storage mapping, same thing. You review the plan, and you can execute it. We also want to add, not in this version but in the next one, migration automation. Sometimes, before doing a migration, you want to deactivate monitoring for a VM, or, if the VM is part of a cluster behind a load balancer, detach the VM from the load balancer, or make changes in DNS. So you could automate the whole process before the migration, and then after the migration re-engage monitoring, re-attach the VM to the load balancer, or perform any other changes you'd like. This way we ensure that all the tasks you want to do around the migration can be done. It's not going to be ready for the GA, but it's coming in a later version. And then, of course, you can monitor the migration progress. Who doesn't like a progress bar, right? So we already include a progress bar as a way to monitor how things are going. And now a bit about the roadmap and where we stand. As I said, last week we released the beta, with capabilities for mass migration, and we are preparing to launch the GA in May with warm migration. This is pretty interesting, because the data in the VM is copied without powering down the VM. Then, when you want to perform the last step of the migration, you do something on the VM, copy the delta, and power up the VM in the target, reducing the time required for the migration. Normally for this kind of migration there's an intervention window required; first, we want the intervention window to be as short as possible, and second, we want to make the most of that window, which is normally not at regular hours.
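The planned automation can be pictured as hooks that run around the cutover. Everything in this sketch is hypothetical; none of these functions exist in MTV. They stand in for the site-specific steps mentioned above, like silencing monitoring or draining a load balancer:

```python
# Hypothetical sketch of pre/post-migration automation around a cutover.
# None of these functions are part of MTV; they stand in for site-specific
# steps such as silencing monitoring or detaching from a load balancer.
def migrate_with_hooks(vm, migrate, pre=(), post=()):
    """Run pre-steps, the migration itself, then post-steps."""
    for step in pre:
        step(vm)
    result = migrate(vm)
    for step in post:
        step(vm)
    return result

log = []
pre = [lambda vm: log.append(f"silence monitoring for {vm}"),
       lambda vm: log.append(f"detach {vm} from load balancer")]
post = [lambda vm: log.append(f"re-attach {vm} to load balancer"),
        lambda vm: log.append(f"re-enable monitoring for {vm}")]
migrate_with_hooks("web-1", lambda vm: log.append(f"migrate {vm}"),
                   pre, post)
print("\n".join(log))
```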
So we are looking forward to helping our friends the sysadmins out there when they're doing their migrations, so they can make the most of their migration windows. Also the pre-migration checks, to check the VMs before migrating and detect potential compatibility issues. What else? Well, if you have any questions, comments, contributions, or suggestions, anything you want to tell us, we have this email: migrate@redhat.com. Please use it. Send us your questions and your suggestions, and if you have any doubt, share it with us and let us know, because the whole team is here listening to help you. So we have this email for you to contact us. And that would be it. I'm willing to show it to you, if I may. May I? Of course, but we do have a couple of questions if you want to take them first. Oh, great. Nice. Tell me. So let's go ahead and get some questions answered before you dive into the demo; I'm really excited to see the demo though. All right: vCenter versions, 6.x and above, do you support them? vCenter 6.x and above? Yes, we support it. What we test is 6.5 and above. What we use underneath is the VDDK from VMware, so we behave like any other backup software: we connect, just like any backup product, using the VMware-certified mechanism for backup, which uses the VDDK, and we use this VDDK to extract the data. The currently supported VDDK is only supported from 6.5 on. However, we know it's backwards compatible and that you could use it to access earlier versions of VMware. So you could run it, but this is what we test; if you want to use it for something else, of course you can, I'm just letting people know what we are testing. And if they have any issues, they contact you at migrate@redhat.com? Yes. Also the Konveyor.io community, would that be...
Yes, these are the places to contact us. You can go to the Konveyor.io community and open... I was trying to open it, but it's not working; right now it seems my DNS is misbehaving. So yes, you can go to Konveyor and join the Slack channels that we have in the Kubernetes workspace. You can go to slack.k8s.io, and in that Slack workspace there are channels for MTV, the Migration Toolkit for Virtualization. You can join them and, of course, tell us there how it's going and propose your suggestions. Any of these channels is good to contact us. Nice. During your demo, I'll pull up the link to that Slack. Also, are you able to share storage between your target VMs that are running on KubeVirt? Are you able to share storage between the target VMs running on KubeVirt? Okay, so this is more an OpenShift Virtualization question. I'm not completely up to date on the status of shared storage in OpenShift Virtualization, so I don't want to say something that is wrong. But you can check the official OpenShift Virtualization documentation, and it will state it there. Or ask in that Slack channel, right? Yeah, in the Slack channel. Let's see. I'm assuming this only works for supported VM infrastructures. Are there any limits on where the VMs can come from? Can I import from multiple types of infrastructure, for example RHV or Azure, at the same time? So, we built a provider for VMware to import from VMware, and we are working on building another provider for Red Hat Virtualization. By the end of the year, we want to work on adding another provider. But of course, if somebody wants to build their own provider for Azure, Amazon, or whatever, and wants to share it with the community, we'll be very, very happy to lend a hand and help with the provider.
And then, once it is ready, include it in the downstream version of the Migration Toolkit for Virtualization. Nice, thank you. Two more questions, then we'll get to your demo, and even more questions after that. All right: what about VMware Tools after migration? And along the same lines, do you recommend cleanup of VMs prior to migration, like cleaning up temp files, downloads, old programs, et cetera? In this case, we are standing on the shoulders of giants, or at least on a lot of proven technology. There's a tool that comes with Red Hat Enterprise Linux called virt-v2v, virtual-to-virtual. This tool was created to extract VMs from VMware and put them on QEMU-based environments, for example Red Hat Enterprise Linux. So it can be used to import into RHV, it can be used to import into OpenStack, and we are leveraging it to import into OpenShift Virtualization. One of the things virt-v2v does is stream the disk, and while streaming the disk it removes all the VMware drivers and tools and adds the drivers necessary for the target, like the virtio drivers. So when the VM arrives at the target, it will boot, and it will boot correctly, because it has the right drivers. Nice, thank you. I'm going to go test out that tool myself later. I mean, that's the command-line version; if you want the easy-to-use version, you can go for MTV, because it uses that underneath. Are there any benefits of using MTV over the VM import wizard available today, when wanting to import just a single VM? Yeah. One of the things we are planning to do, and I'm pretty sure it's going to be on schedule, is that the Migration Toolkit for Virtualization is going to supersede the import tool.
The import tool is just for you to test importing one VM, and there's code in the import tool that we are leveraging for the Migration Toolkit for Virtualization. The benefit is that you can plan: you can plan with a list of VMs. The other benefit is that, once it goes GA, you'll be able to check, before you migrate, that the VM doesn't have anything that would render it unbootable or unable to be migrated. And the third benefit, which we are working on delivering for GA in May, is that you will be able to do a pre-copy before the migration, so when you do the migration you only have to copy the delta, reducing the time needed. These are the benefits MTV brings versus the tool that comes with OpenShift to import one single VM. Awesome, thank you. All right, let's see your demo, and then we'll get back to more questions. Cool. So, OpenShift and OpenShift Virtualization. You have your OpenShift instance, and OpenShift Virtualization is supported on bare-metal nodes. So you will need some bare-metal nodes to have it supported, although you can enable nested virtualization, like I do here. Things are going to go a bit slowly, because we're using nested virtualization, but I expect this to work properly. This is our lab environment. This is OpenShift 4.7, as you can see here; this is the supported version. I can go to the installed Operators and choose all projects, and I'll see that I have the OpenShift Virtualization Operator installed and configured. This is OpenShift Virtualization version 2.6.0, the one we are testing on. So if you want to run on a tested environment, you should run the Migration Toolkit for Virtualization on top of OpenShift Virtualization 2.6.0. So what do you need to do?
You install the Migration Toolkit for Virtualization Operator, and then you can use it. How do you use it? When you start it, you get a project created, openshift-mtv. If you go to the networking section for that project and check the routes, there's a published route, which is the interface to the Migration Toolkit for Virtualization, and I have it open here. I've let it load; this is the interface. It's pretty straightforward. Once I've completed the migration, I can do a quick demo of how to install it. So I can go here and get started. This has just been deployed. First thing, I need to log in, so I need to get the credentials; you have to log in as a cluster administrator. Give me a second to gather my credentials, please. Okay, I'm logging in, and I'm going to share my screen again in three, two, one. So I log in here. I'm in Spain and the cluster is in Boston, so expect some delays while running this demo, but I've run it a couple of times and it worked well. So I can get started. I see the providers. This is the provider where the Operator was installed and instantiated; it has found seven storage classes and it's completely ready. Now I can add a provider: I select VMware, give it a name, vcenter, then provide a hostname, our vCenter hostname, a username, administrator@vsphere.local, then the password, and then the fingerprint. The fingerprint is to ensure that we're connecting to the right VMware provider and not to something else. Once we do that, it's going to connect to VMware. I can go to providers, VMware; it's going to check, and if everything's okay, it starts gathering the data.
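About that certificate fingerprint: a common convention for vCenter is the SHA-1 digest of the server's DER-encoded TLS certificate, printed as colon-separated hex. As a small sketch (in practice you would first fetch the certificate, e.g. with Python's `ssl.get_server_certificate((host, 443))` and `ssl.PEM_cert_to_DER_cert()`; here we only demonstrate the formatting on placeholder bytes, and the assumption that MTV expects this SHA-1 format is ours, not a statement from the talk):

```python
# Compute a vCenter-style certificate fingerprint: the SHA-1 digest of a
# DER-encoded certificate as colon-separated uppercase hex. The input
# bytes below are a placeholder, not a real certificate.
import hashlib

def thumbprint(der_cert: bytes) -> str:
    digest = hashlib.sha1(der_cert).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# placeholder bytes stand in for a real DER certificate
print(thumbprint(b"example-der-certificate-bytes"))
```

Comparing the value you compute yourself against the one vCenter presents is what protects you from pointing the tool at the wrong (or a spoofed) endpoint.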
You see two clusters here, two hosts, 56 VMs, 13 networks, four datastores, and now it's ready. So we have the source provider ready and the target provider, OpenShift Virtualization, both of them ready. Now we can create the mappings. I go and create a network mapping. I create the mapping and name it, you know, "mapping network", because I'm very original. I choose the provider, the source and the target, and I have to choose the network equivalences. I go to the source network and choose the VM Network, and then on the target I select the pod network, and this is going to be my mapping: all my VMs attached to the VM Network are going to be reattached to the pod network afterwards. So I just create the mapping, and this is the mapping I have created, available for as many migration plans as I want. Then I go to storage and create a mapping, same thing. I create the storage mapping, select the source provider, vcenter, select the target, host, and I know that my VM is running on the NFS datastore, but I could map the other ones too. I want to use the Ceph RBD storage class, because I'm using OpenShift Container Storage here, so it's properly distributed, it's software-defined, and it works really well. So I create this map, and I have the two maps ready. Now let's migrate. I go to migration plans, create a migration plan, and give it a name: I'm going to call it "MTV plan", with description "MTV". I select the source provider, vcenter, and the target provider, host. And I love this: you can select a namespace among all the existing ones, or I can type a new one, "mtv-migrate", and if I click here, it will create that namespace for me. Okay, good, next. Then I'm going to filter the VMs.
I choose the cluster where my VMs are running, click next, and it gets the whole list of VMs. There are a lot of people working here, so I'm going to filter the VMs by my name, and there we have it: this RHEL VM that is running, which I'm going to migrate. I chose a small VM to make this migration quick, so we can see it happening. I select this VM; I could select 20 VMs if I wanted to, no problem with that, or 30, or 100. Then I choose the network mapping and select this one. Next, I choose the storage mapping; you see, I could create a new storage mapping here in case I was missing one. Next, and then I review the result. I'm going to migrate only one VM; I could migrate 100, but I'm going to migrate only one. These are the mappings, this is the plan, and I click finish. Then everything's ready to be migrated, so I can click start, and the migration will begin. One of the things we are planning to add is the ability to schedule this process, so you could say: run it at three o'clock in the morning. Now that I've run it about 20 times, I'm completely sure it runs well. So let's get it running. This is the progress bar for the number of VMs migrated; in this case it's only one, so it's going to go straight to green. But we can check the details: right now it's in the disk transfer phase, copying the first gigabyte of data out of nine. So it's going to copy and stream the data, and then it will convert the image for KubeVirt, doing all those transformations of the drivers that I mentioned and cleaning up the tools. When that's completed, it will be totally done. So this is now running. I can go to OpenShift, go to the overview, projects, and then select the project I just typed, mtv-migrate, and this is the project that has been created for me. It wasn't here before.
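What the wizard builds behind the scenes in upstream Forklift is a plan resource tying together the providers, the two mappings, the target namespace, and the selected VMs. As a rough sketch echoing the demo (field names based on the `forklift.konveyor.io/v1beta1` API; every name here is a hypothetical placeholder):

```python
# Illustrative Forklift Plan echoing the demo: one VM from the "vcenter"
# provider into the "mtv-migrate" namespace, using the two mappings
# created earlier. Field names are based on forklift.konveyor.io/v1beta1;
# all names are hypothetical placeholders.
plan = {
    "apiVersion": "forklift.konveyor.io/v1beta1",
    "kind": "Plan",
    "metadata": {"name": "mtv-plan"},
    "spec": {
        "provider": {"source": {"name": "vcenter"},
                     "destination": {"name": "host"}},
        "targetNamespace": "mtv-migrate",
        "map": {"network": {"name": "mapping-network"},
                "storage": {"name": "mapping-storage"}},
        # one small VM for the demo; a plan can list tens or hundreds
        "vms": [{"name": "rhel-demo-vm"}],
    },
}
print(len(plan["spec"]["vms"]), "VM(s) in plan", plan["metadata"]["name"])
```

Because the plan is declarative, scaling from one VM to a hundred is just a longer `vms` list against the same mappings.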
I can click on details, workloads, and a VM will be created here in the workloads and I will be able to check on it. Let's give it a couple of seconds. Let's see how this is going — the disk transfer. But it seems that, you see, the VM is already there: the instance is already created, and now a disk is going to be attached and a network is going to be attached, and this VM is going to complete the migration and will be running. So this is the demo so far. We have to wait for it to complete; it normally takes around eight minutes. So if you want, you could shoot more questions. I know I have a lot of questions, but I wanted to wait until the end. The demo itself, yeah, it's that simple. It's like super, super simple. I mean, our friends in user experience and design are working with us and are making things super easy, super easy to understand and very well laid out, and the engineering team is focusing on making this as robust as possible. So we end up with these tools, as you see, very simple and very reliable. So, amazing demo. I mean, I wanted it to keep going, but one thing that I was wondering is: at the beginning you mentioned analyzing your applications using the toolkit, right? The Migration Toolkit for Applications, and then using MTV. So how do those line up together? How does your planning go? That's a great question. I don't know who mentioned it, but I love it. Look, we have here what is called Forklift — let's say that this is MTV — and here we have Windup, which is currently MTA. So these are the two tools that we have available. So in case you want to replatform and bring virtual machines to Kubernetes, you could use MTV.
And let's say, okay, I'm moving 20 VMs with JBoss Enterprise Application Platform to containers and I want to turn those applications into something more — apply the strangler pattern and be able to turn them into microservices — or I have them running in WebSphere or Tomcat and I want to put them in containers. So there is a set of paths that MTA, the Migration Toolkit for Applications, covers, and what MTA is going to do is analyze the application at the application level. But what you can also do is take the VM as it is in VMware and bring it to OpenShift. And then you already have all the developers working in OpenShift, with an environment built for developers that developers enjoy and understand. You can manage it the way you manage your whole environment for cloud-native applications, but with virtual machines, to make the transition even smoother. So once the VMs with your WebLogic, let's say, are running on OpenShift Virtualization, you will be able to take those applications and analyze them. And the thing is that what MTA does is pretty simple, and I can run it for you — I have it here. You just have to give it an application and it will analyze it and tell you what you need to change in the application to do the migration. So these are the two ways you could improve or modernize the current status of your application portfolio with OpenShift and with the tools that we provide, MTA and MTV. And they are two different paths. One of them is, as I say, focused on applications: on improving the application and bringing it into a container environment, to a cloud-native world, as fast as possible — which is normally not a fast and easy thing to do. And the other one is just lifting and shifting the VMs so you have all the applications in the environment that your developers want to use. So for that, I mean, you can go to red.ht/mta and download it. This is version 5.1 so far.
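The "give it an application and it tells you what to change" step is a command-line run. A hedged sketch of assembling that invocation: the `mta-cli` binary name and the `--input`/`--output`/`--target` flags follow the MTA 5.x CLI, but the application and report paths here are hypothetical:

```python
from pathlib import Path

def mta_command(app: Path, report_dir: Path, target: str = "eap7") -> list[str]:
    """Assemble an MTA (Windup) CLI invocation as an argv list.

    Hedged sketch: binary name and flags follow the MTA 5.x CLI;
    the paths are placeholders, not from the demo environment.
    """
    return [
        "mta-cli",
        "--input", str(app),          # application archive to analyze
        "--output", str(report_dir),  # where the HTML report is written
        "--target", target,           # e.g. move WebLogic apps to JBoss EAP 7
    ]

# Analyze a hypothetical WebLogic application for migration to JBoss EAP.
cmd = mta_command(Path("weblogic-app.ear"), Path("report"))
```

Passing this list to a process runner (for example `subprocess.run(cmd)`) would produce the report Miguel shows next, with the proprietary artifacts flagged per application.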
So you just download the zip file and unzip it, run the script as I just did, and then you will be able to run the Migration Toolkit for Applications, analyze your applications, and this is what you will see. And you're getting two demos for the price of one. So this is an application that I analyzed that is completely ready to run on JBoss EAP, whereas this other one has some WebLogic proprietary artifacts that need to be changed. I can dig down into this application and check the issues: look, I'm using the WebLogic proprietary logger and I need to change it. So these are the two paths that you could follow. And normally you first migrate the VMs and then migrate the applications, although sometimes it makes more sense to do the application migration directly, depending on the status. But for that, I mean, you can count on our consulting team; they are always there to do a discovery session and help the customers decide what they want to do, what the best path is, and how to get there. Did I reply to the question? Yes, yes, that was great. And then, we don't always want to say, hey, you have to use consulting, right? No, no, I mean, there are a lot of partners helping our customers do these migrations too. And these partners are, as I say, very skilled, because they have been doing it for a long time. And I mean, it's a matter of the customer choosing how they want to do this modernization and how to get on this journey to the open hybrid cloud, to provide a better service for their own clients and to be able, sometimes, to expand their addressable market and grow faster. So yeah, if we can help our customers in that way, we're always willing to do it. Thanks. I wanted to point out a great comment in the chat from a little bit earlier: a staggered schedule would be nice too if I'm migrating 100 VMs. Yeah, that's...
Yeah, my colleague Fabien, who is the engineering manager, is going to love hearing this, because he's already thinking about how to establish some throttling and how to make this scale to move a large number of VMs — we're thinking hundreds. So yes, it's something that is on our minds, but right now we have just released the beta, and we keep adding features, and that is one of the things we keep in mind for future versions of the Migration Toolkit for Virtualization. Thanks for the comment, by the way. And then there's: are there any issues if the VMware environment is using vVols? vVols, yeah. So far, from what we have tested, they behave; there are some corner cases there in which we found some issues when obtaining the data from the vVols, but it's very unlikely to happen. Because, again, the way we extract the data from the VM is the same way a backup solution would do it, and VMware wants their backup vendors to work well with their vVols. So normally, if there's an issue extracting data in some very weird corner case, it's because there's something wrong with the implementation of the vVols, because we are using the toolkit that VMware provides, the VDDK, to extract the data from VMware. So we should be safe. You know, I feel that panic myself if I hear "it should be safe," right? Especially when you're talking about data. So... Yeah, yeah. No, no, you're right. I mean, that's why we follow the safest path. But for this I behave like an engineer: if I haven't tested it a million times, I'm not going to say it's safe. But I mean, it's as reliable as any other backup tool. So I was going to ask: if there is an issue, do you see the error right away? How are you notified of that corner case? I mean, if during the migration we get an error, we'll see it here directly in the interface.
We are putting in all we can to make the error messages as clear and explanatory as possible. That is key to be able to perform a migration, because one single VM that is not migrated will be a problem. And what we're doing right now is releasing the beta as early as possible, so people can start trying it and giving us feedback on these corner cases. And the second part is that, again, we're working on making the logs and the error messages as clear as possible and on being able to gather all the logs together. So this is the direction we are heading. Of course, as I said, we are choosing the safest path. So right now what this beta does is power down the VM and then start copying the data with the VM powered down, so the VM is in a consistent state, and then it powers on the VM in the target, but it doesn't remove the VM in the source. So if there is any corner case that we forgot about, that we couldn't find or that we are not aware of, you can always shut down the VM that you have just migrated and power on the VM that is on the source. Those are kept, and normally, when we've done migrations with customers, in the initial migrations during the pilot phase, before we scale, we keep these VMs and we run a batch of tests against the migrated VMs to ensure that they are running perfectly. And when those tests are completed and verified, that's when we delete the source VMs. So normally there's a period of time in which you keep the source VM as a way to roll back, just in case something didn't work as expected. But so far our experience is that whenever the migration gets completed, then, unless there's a misconfiguration of networking, the VM works as in the source and it works properly. So no concern about that. Dan had a follow-up question, but I think you already answered it — I'll say it anyway: what if the source VM has snapshots? Will that foobar the migration?
So if you're already bringing down the whole VM, are you worried about the snapshots? This is something where you got me; I'm not completely sure. The previous behavior was that we collapsed the snapshots, so we didn't have any issue with snapshots. I mean, VMware tells all the customers: look, don't use snapshots for backup. So if you have a snapshot, it should be possible to collapse it. We are working on not having to collapse the snapshots, but right now I don't know the status of this, so I may need to follow up. Yes. Yeah, just a quick note on that: we are not deleting snapshots. Actually, back in IMS 1.1 and IMS 1.2 we validated that it works without removing snapshots at all. So we kept that behavior, and the snapshots are not a problem. From the VMware point of view, if the VM is down, it's going to use the current state of the VM as the base to do the transfer. So whatever snapshots you have, you keep them; we're not removing them. And the current state is the image that we are moving. So even if you want to roll back, you keep your previous snapshots — if you use them as points in time for other rollbacks, you can continue to roll back even further into the past. For warm migration, we are creating snapshots for change block tracking, but normally they shouldn't affect normal snapshots. It's something that we need to verify with real virtual machines that we kind of try to break, to see what happens; that's in our test plan for warm migration. Ladies and gentlemen, let me introduce you to Fabien Dupont, our engineering manager for the Migration Toolkit for Virtualization. Thanks a lot, Fabien, for coming to the rescue. And if you want to stop sharing your screen, Miguel, you can. Sure — I mean, it's going to take some more minutes, but yes, I'm going to stop sharing the screen. I'm really excited that you jumped in too, Fabien. Thanks. I was listening; so far you did very well. Thanks.
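The change block tracking (CBT) idea Fabien mentions for warm migration can be illustrated with a toy model: between two snapshots, only the blocks whose content changed need to be copied again. In reality the hypervisor tracks changed blocks itself; this sketch just recomputes per-block checksums to show the principle:

```python
# Toy illustration of change block tracking: diff two snapshots of a disk by
# per-block checksum and copy only the changed blocks. Real CBT is maintained
# by the hypervisor rather than recomputed like this; block size is arbitrary.
import hashlib

BLOCK = 64 * 1024  # 64 KiB blocks (illustrative size)

def block_sums(disk: bytes) -> list[str]:
    """Checksum every block of a disk image."""
    return [hashlib.sha256(disk[i:i + BLOCK]).hexdigest()
            for i in range(0, len(disk), BLOCK)]

def changed_blocks(old: list[str], new: list[str]) -> list[int]:
    """Indices of blocks that differ between two snapshots."""
    return [i for i, (a, b) in enumerate(zip(old, new)) if a != b]

base = bytes(4 * BLOCK)                                    # pristine 4-block disk
snap1 = block_sums(base)
modified = base[:BLOCK] + b"x" * BLOCK + base[2 * BLOCK:]  # dirty block 1 only
snap2 = block_sums(modified)

delta = changed_blocks(snap1, snap2)  # only the dirtied block needs re-copying
```

This is what keeps the incremental passes of a warm migration small: the bulk copy happens while the VM runs, and only the delta is transferred at cutover.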
We have a couple more questions. I saw that the distributed switch port groups during migrations — those networks are not an issue, correct? Distributed switch port groups... Dan, do you want to ask live, or if you're... They are not an issue. They are treated like any port group or traditional network. The main difference, in our opinion, between a normal network and a distributed vSwitch or distributed port group is that with a distributed port group you don't have to configure it on every ESXi host. So it might be that some of the networks on the source don't exist on all the ESXi hosts when you have a traditional or legacy network; but if you're using distributed vSwitches, it's going to be automatically configured for you. From our point of view, though, it's just another network on which you can have VLAN tagging, MTU parameters or whatever network configuration VMware has. So we already take care of them. Nice, that answered his question. Also, this is a great question too — I was wondering this as well. Mike asks: is the source VM still alive after the migration? Well, it's still alive, but sleeping. So we shut the VM down, but the VM is not removed, as we explained. We keep it as a backup plan. So if anything is wrong on the destination, you can still roll back; your VM is there, and it's an easy rollback, and really fast. That's why we keep them. So have you run into issues as you're building out the tool with network contention, or has anything accidentally stayed up so that you're worried about two things being live? No issues? No major issues. One thing we've noticed is that sometimes Windows machines don't really appreciate being migrated, but usually trying the same VM from a different vCenter worked. So we consider it more an environment issue in the test labs we have rather than a VMware or conversion issue in virt-v2v.
From a network contention point of view, of course, the faster the network is, the better — to reduce the downtime, mainly. It's really a question of downtime. We are doing our tests in PSI, and it's not a super fast environment because we are sharing the network with many other projects. So sometimes it's quite slow, and well, the migration goes to the end; it works, it's just that you have to be patient. So yeah, we really advise running a network benchmark before you start doing mass migration, to have a clear assessment of what the platform can support. You also don't want to crash the network and have an impact on running VMs or on backups, because they are likely to use the same storage backend too. So if you have two processes reading the same disk at the same time, it's probably going to slow down the backup, which is not a good idea if you need to roll back. So, general recommendation, similar to doing backups: don't do it over your production network, right? Yeah. That's just a reminder. All right, so we have a couple of minutes, three minutes left, and there are questions around OpenStack and RHV. So, can the source be RHV or OpenStack, and is there a supported migration path? It will be; we're working on it. Let's say that for the third quarter of this year, if everything goes beautifully and wonderfully, we'll be able to have RHV as a source, and then by the end of the year we want to have OpenStack. So it's in the plan; we're already considering it. Other providers like Hyper-V or Nutanix are not in the plan right now; however, if somebody wants to contribute that, we are more than willing to listen to them and help them ramp up to build it. So that's more or less the roadmap, yeah. Or if you have a customer with some budget for engineering... Yeah, also — which may happen. So again, go to konveyor.io, and that's Konveyor, K-O-N-V-E-Y-O-R dot io. I hope I spelled that right.
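The benchmarking advice above turns into a simple back-of-the-envelope calculation: measure the effective throughput between the source and the cluster, then estimate how long a batch of cold migrations keeps its VMs down. The 0.7 efficiency factor below is an assumption standing in for a real measurement, and the sketch treats GB and Gb/s loosely:

```python
# Rough downtime estimate for a batch of cold migrations. The efficiency
# factor is an assumed placeholder for a measured benchmark result, and
# GB/GiB are treated interchangeably for back-of-the-envelope purposes.
def estimated_transfer_hours(total_gb: float, link_gbps: float,
                             efficiency: float = 0.7) -> float:
    """Hours to move total_gb of disk data over a link of link_gbps."""
    gb_per_hour = (link_gbps / 8) * efficiency * 3600  # Gb/s -> GB/s -> GB/h
    return total_gb / gb_per_hour

# 100 VMs at 9 GB each (the size of the demo VM) over a 10 Gbps link:
hours = estimated_transfer_hours(total_gb=100 * 9, link_gbps=10)  # ≈ 0.29 h
```

Even a healthy 10 Gbps link shared with production traffic or a backup window changes this picture quickly, which is exactly why Fabien recommends benchmarking first and keeping migrations off the production network.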
Now I'm going to show you the website for a second, if I may. Perfect, yes, please. We just updated it recently, so we have all the information here: all the projects we host — containers, virtual machines, replatforming, measuring software delivery performance, refactoring applications to Kubernetes — and the meetups that we do, with content based on real-world examples in many cases, in which we invite people to the community who are out in the field working with customers and can provide tips and feedback on how to perform a good migration, what to avoid, and what the best practices are. All of the meetups that I've been to so far have been great. I mean, like you said, real-world examples. So I definitely recommend, you know, going to those virtual meetups right now. Yeah, and MTV is in beta, so we're going to start rolling, and whenever we have these runs with customers for large migrations, I'm going to invite whoever is involved to share the experience. Well, thank you — we're at time, and thank you again. That was a really great presentation, a great Q&A, and we'll definitely have you back for a follow-on. Happy to be here, and thanks to you for inviting us, to Fabien for saving me, and to Chris for taking care of the backend of this meeting. Yes, thank you, Chris. And for everybody else that has joined us on BlueJeans, Chris is seeing us out on OpenShift TV on the live stream — thank you for joining us here for the live Q&A. And again, thank you, Miguel. I'm going to stop the recording, and then I'm going to copy the questions so that you have them and can follow up in the Konveyor group on Slack, or however you want to take care of the rest. Really appreciate it. Yeah, all right, cool.