Good morning, good afternoon, good evening, wherever you're hailing from. Welcome to a very special edition of OpenShift TV. We are going to be having the Konveyor community roadmap session today, on the air. I'm very pleased to be joined by a lot of folks that are working on the Konveyor project. I'm Chris Short, executive producer of OpenShift TV. If you've never seen me before, check out openshift.tv, the website, and you can learn more. But right now I'd like to hand it over to Marcus Nagel, who will lead us through this journey.

Yeah, hey, and welcome everyone. Thanks for having me, Chris. Thanks for having us, Chris, actually. So, a few words about myself: I'm a senior architect with Red Hat consulting services. I'm working out of EMEA, in the EMEA solutions practice, and I'm helping customers migrate to Kubernetes in different variations, like with Forklift and with Crane, on a daily basis. So that's my daily business, and I'm pretty excited to see the roadmap. I'll walk you through the roadmap session today. Well, we have all the experts here, so I'm just a pretty dumb user of the tools, but we have all the experts, and I'd like to share the presentation now.

Okay. So, as I said, the Konveyor roadmap: what's up and coming? First of all, if you haven't heard about the Konveyor community, it's a community of people passionate about moving applications, and also VMs and infrastructure, everything, onto Kubernetes, because we're passionate about Kubernetes. So please join us if you have time. As a user, I can say: well, I'm not contributing to the core features, for example, but just giving the community some real-life experience, like "this worked well with the tool" or "we encountered this and that", or contributing on the Slack channel, makes it easier for the engineers who are actually working on the tools to go in the right direction. So please contribute and share your feedback.
That's what I do. Okay, so let's talk about all the tools we have in our portfolio here, the community portfolio. First of all, Forklift, if you haven't heard about it: that's a tool that helps you rehost your virtual machines onto KubeVirt. You might ask yourselves, okay, what's the point of having a virtual machine in Kubernetes? Well, the interesting thing is, it's not only running in Kubernetes, it's administered as a Kubernetes object, together with your containers, which it can communicate with over the Kubernetes networks, etc. So there are really many good use cases, and I'm using that, as I said, as part of my daily life.

Next one is Crane. Let's say, well, I work at Red Hat, so no big surprise, I'm using that to help customers migrate from OpenShift 3 to 4, because it's super straightforward doing that. Crane is the upstream version of the Migration Toolkit for Containers, and it helps us migrate from one Kubernetes cluster to another one.

When we talk about application modernization, I forgot to mention that I'm also one of the managers of the Red Hat internal application migration and modernization community. We're using Move2Kube and Tackle quite a lot. So, replatforming applications that are, for example, on Docker Swarm or Cloud Foundry onto Kubernetes: that's where Move2Kube comes in. And I already mentioned Tackle: when we dive into the trenches of migrating and refactoring applications to Kubernetes, that's where Tackle comes in. Tackle is not just one tool, it's actually a collection of tools that are growing together, so we'll hear about that. And last but not least, when we talk to customers: okay, so how do I measure my progress? How do I measure how much better I've become in terms of typical software delivery metrics? That's where Pelorus comes in.
It can do much more, but typically what we tell customers is: if you want to measure your success, we'll define some critical metrics and have a dashboard, and that's where Pelorus helps.

Okay, so, for a quick overview: each of the teams naturally has a detailed timeline of what's coming when, but just to give you a quick overview, I won't go through each of these, as each of the tools' owners and community members will talk to that. I'd first like to start with Forklift. Some of the cool features that I'm really looking forward to using are, for example, the pre- and post-migration hooks, because we sometimes see that when we move a VM, there's some, well, simple work that needs to be done. But when you're doing mass migration, that can become a little cumbersome, so migration hooks are a great solution. Also the upcoming pre-migration checks. But let's have the Forklift team talk about these. So, Forklift team, ding ding, where are you?

Hello, I'm Fabien Dupont. I'm part of the Forklift team, the engineering team for that, and I think Miguel must be close.

Yes, so I'll introduce myself too. Thanks, Fabien. This is Miguel, and I'm the product manager for the Forklift team. Do you want to go ahead, Fabien, or do you want me to do it?

Oh, please, please do it.

Okay, so, well, we have big news: we launched Forklift 2.0, it's finally GA. So what does Forklift 2.0 come with?
It can do migrations, of course. As Marcus has just explained, we are moving virtual machines from a source to a target, the target being Kubernetes with KubeVirt. So if you have KubeVirt, you will be able to bring your VMs there. What we have added as the source initially is VMware vSphere. And we are also doing the pre-migration checks of the virtual machines: before migrating, we check the configuration of the virtual machine in the source. Things like raw device mappings, shared disks, CPU pinning, or other configuration that would require some manual intervention are checked, and you're being warned about them, so you can perform some tasks in order to make it migrate properly without finding the issues afterwards. We want you to know where the risks and the possible issues are before you run into them. So this is what Forklift 2.0, launched a couple of weeks ago, comes with.

And now we are working on Forklift 2.1, and what we are adding is another source to migrate VMs from. One of our main targets is to add sources for VM migration, and the source that we are adding is Red Hat Virtualization and oVirt; oVirt is the upstream project for Red Hat Virtualization. We want to be able to migrate the VMs that are in oVirt or Red Hat Virtualization to OpenShift and to Kubernetes with KubeVirt. So this is our target. And what else are we adding? Migration hooks.
Why? Because sometimes when you want to migrate your VM, that VM requires, for example, a change of IP addressing. You want to automate that change, and for that you need to be able to run tasks before and after migration. What hooks do is provide a way to run those tasks: you can provide your own container, and you will be able to run those tasks in the VM. Or, for example, if it's attached to a load balancer or it is being monitored, you will be able to deactivate the monitoring while you migrate, so you don't receive 200 warnings on your mobile phone. You could decouple your VM from the load balancer and then reattach it to the load balancer once the migration is completed. Next, please.

And Forklift 2.2: the target for this is the end of the year. We want to add the warm migration capability that we have right now in migrations from VMware to oVirt and Red Hat Virtualization. What this warm migration capability means is that you can start copying the data while the VM is running. You start copying the data, and whenever you want to migrate, you do what we call the cutover: you shut down the VM, copy only the changes that were applied to the disks, and then start the VM. That means two things. First, you do the migration in less time, with less downtime. Second, you could migrate more VMs in the same intervention window, so it makes things a lot easier.

What else are we adding? Well, we said that we are going to have hooks to run tasks pre- and post-migration, and it's very likely that you want to build those hooks with Ansible. So we're going to build a feature that will help create the images that you're going to use to run Ansible playbooks, with the Ansible hooks image builder, and then attach them to the pre- or post-migration hooks. This way it will be a lot easier to perform the tasks and, for example, if you are testing them, to make changes to the tasks that you want to run pre- or post-migration in a migration plan. What else?
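The warm-migration flow described here can be sketched in a few lines. This is only a conceptual illustration of the idea (bulk copy while the VM runs, then a small delta at cutover), not Forklift's actual implementation; the block-level data model and all names below are invented:

```python
def warm_migrate(disk, run_workload):
    """Illustrative warm migration: bulk copy while the VM runs, then
    copy only the blocks the workload dirtied during the copy."""
    # Phase 1: initial bulk copy while the VM is still running (the long,
    # non-disruptive part of the migration).
    target = dict(disk)
    # The VM keeps writing during the copy; collect the dirtied block ids.
    dirty = run_workload(disk)
    # Phase 2: cutover. The VM is shut down and only the delta is copied,
    # so downtime is proportional to the delta, not the disk size.
    for block in dirty:
        target[block] = disk[block]
    return target, len(dirty)

# Example: a 1000-block disk whose workload touches only 3 blocks
# while the bulk copy is in flight.
disk = {i: f"data-{i}" for i in range(1000)}

def workload(d):
    for i in (7, 42, 99):
        d[i] = f"new-{i}"
    return {7, 42, 99}

target, delta_size = warm_migrate(disk, workload)
```

The point of the sketch is the cost model: the cutover transfers `delta_size` blocks instead of the full 1000, which is why more VMs fit into the same intervention window.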
We will go on with the pre-migration checks that we already have. And last but not least, we want to do, and this is for the beginning of next year, say February or March, Forklift 2.3, adding support to migrate VMs from OpenStack; basic support to start migrating VMs from there. Did I forget about anything here?

No, I think it's correct. And from an engineering point of view, we have a few side topics around not just migrating the VMs, but helping people to reconfigure the destination platform. Kind of migrating the configuration, for the network, for example, from the source to the destination, like identifying all the VLANs, and things like that, where we would provide some guidance first and potentially see the traction around automating that. It's not yet on the roadmap because, well, we have a lot to do already with RHV and OpenStack.

Yeah, there's another one. Tell me about Opal.

Oh, yeah, Opal. I'm not sure we want to use that name, it's used somewhere else, but it's actually an analytics layer on top of the inventory we have. Today, we have an inventory of the different source providers, and destinations too, and the way we select the VMs is very guided: you need to know which VMs you want to migrate. There's no search in the inventory, so you have to know your source inventory really well before doing migrations, which is fine; usually you know exactly what you want to migrate. But with Opal, we have a way to explore the data. Your inventory is basically a graph, and we allow you to traverse the graph in many directions. It also has a search engine to find VMs that are alike. You want to find, as you mentioned, all the VMs that are using a raw device mapping, because you know they all need the same kind of pre- and post-migration hooks.
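A query like the one just described, finding all the VMs that share a concern such as a raw device mapping, might look roughly like this. The flat data model below is purely illustrative (Opal's real inventory is a graph with a search engine on top), and every field name is invented:

```python
# Invented inventory entries; real providers would be vSphere, RHV, etc.
inventory = [
    {"name": "vm-db-01", "provider": "vsphere",
     "concerns": ["raw-device-mapping", "shared-disk"]},
    {"name": "vm-app-07", "provider": "rhv", "concerns": []},
    {"name": "vm-db-02", "provider": "vsphere",
     "concerns": ["raw-device-mapping"]},
]

def vms_with_concern(inventory, concern):
    """VMs that are 'alike': they all need the same pre/post-migration hooks."""
    return [vm["name"] for vm in inventory if concern in vm["concerns"]]

# Group the VMs that need the raw-device-mapping hooks into one plan.
plan_candidates = vms_with_concern(inventory, "raw-device-mapping")
```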
So the idea is to help you find similarities between VMs and create your migration plans from that. And because we have a view over all the providers at the same time, we expect that you're going to have a consistent view of all your VMware and RHV environments and find the same issues everywhere. Sometimes you would just want to make a small change to the source configuration of the VM. Say you find all the VMs that have a USB drive attached on the source; of course, we cannot migrate the USB device, so you may just want to go over all these VMs and detach the device, because it's not used anymore. Things like that, so that will make your life easier when trying to troubleshoot the migration. So, next please.

So this is the timeline. Right. As I said before, we launched the beta in March, we launched Forklift 2.0 in June, and we plan to launch Forklift 2.1, with the RHV cold migration and migration hooks, in August. Then 2.2 in November, by the end of November more or less, and then February next year for the basic OpenStack support in Forklift 2.3. So that would be it. Marcus, back to you.

Yeah, thank you, and thanks for presenting. I'm pretty excited to hear these things, because there are many things, well, we've all been waiting for and nagging you about. So, for the Crane roadmap, just a few things I, as a user, have also been waiting for. One is the namespace mapping: if you've ever moved namespaces from one cluster to another, you've had the challenge of, okay, maybe I just want to tweak it a little. Now we can. And also incremental PV migration: when we have already deployed into the new cluster using our CI/CD pipeline, but we just want to migrate the PVs, basically the state of the application, and do that incrementally. So these are some of the highlights that are coming, but I'll leave it to the Crane team to elaborate on that. So please go ahead.
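The namespace mapping just mentioned boils down to rewriting the namespace field of each resource as it lands on the destination cluster. Here is a minimal sketch of that idea; it is not Crane's actual code (Crane drives this through its own migration resources), and the manifests are only examples:

```python
def remap_namespace(resource, mapping):
    """Return a copy of a Kubernetes resource manifest with its
    metadata.namespace rewritten according to a source->target mapping."""
    out = dict(resource)
    meta = dict(out.get("metadata", {}))
    if meta.get("namespace") in mapping:
        meta["namespace"] = mapping[meta["namespace"]]
    out["metadata"] = meta
    return out

# Example: a ConfigMap exported from 'legacy-prod' lands in 'prod'.
cm = {"apiVersion": "v1", "kind": "ConfigMap",
      "metadata": {"name": "app-config", "namespace": "legacy-prod"}}
migrated = remap_namespace(cm, {"legacy-prod": "prod"})
```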
Thanks, Marcus. This is John Matthews. I'm an engineering manager related to Crane, and I'm based out of Raleigh, North Carolina. So, Crane 1.x. This is an effort we've been working on for probably about two years now, and it's approaching a pretty good maturity state. This is an offering for a cluster admin: Crane 1.x is all for a cluster admin, and it's to help them migrate stateful workloads between clusters.

Next month, in mid-July, we have a release coming out for Crane 1.5, and we have the namespace mapping support in here that Marcus was talking about. With this, when we do a migration, we're going to allow a user to map the source namespaces to different namespaces on the destination, and adjust them during the migration. We've also been continuing to focus on improving the debug experience. Migrations can be quite complex, and we're trying to make it easier in case something goes wrong. There are two highlights I want to call out here. One of them is that we are now emitting more Kubernetes events during the migration, to give more transparency into what's actually happening during that workflow. And then on the UI side, we've made a really good improvement of correlating those debug resources with live status and color coding, so you can look at one single thing and get a pretty good feel for what the state of those resources is, whether they're in a bad state, and where to put your time. We're also continuing to look into performance improvements, and one that we did for 1.5 is to enable a cached client. Now, this is something that we don't have enabled by default just yet; we think we will turn it on by default in a future release. We had a few concerns around memory consumption, so for right now, just to be safe, it's behind a feature flag, but it's there for anybody who wants to use it.

Then later, in about late September, we expect the Crane 1.6 release. This release has a technical challenge for us.
One of them is that Kubernetes 1.22 is going to stop serving the v1beta1 version of custom resource definitions. That has an impact on us, because we need to exist essentially all the way from Kubernetes 1.7 through 1.22 or newer, because we are establishing ourselves as a path to migrate off of these old versions of Kubernetes to the new. So it makes it a little bit harder for us: how do we handle this so we can coexist with the newer versions as the API versions change, and still stay compatible with older versions?

Then for PV migration, we are extending our support of what we can do, so now we can start to help folks that are using a CI/CD pipeline. With a CI/CD pipeline, we wouldn't need to handle migrating the Kubernetes resources, because that's something the CI/CD pipeline would already do. But as far as the PV data goes, that's where we could help out, migrating the state, and that's an incremental, state-only migration.

And then the last piece in 1.6 is that we are improving a detail of the UI. This is more to help out with a few edge cases on different platforms where it's not possible to change the CORS headers. CORS headers are what allows the JavaScript code to talk to back ends different from, essentially, the route that served us the actual JS code itself. This is something we saw we needed when we were talking to IBM ROKS, and we're planning to put that in. We're planning to use the same approach that Forklift used; Forklift solved this problem before us with their proxy, so we're looking to put that same thing in. We can go to the next one.

So, Crane 2.0. This is all brand new R&D work that the team is heavily implementing and exploring right now. Crane 2.0 is essentially us taking the lessons learned from the past two years to see what we can do better in how we approach migrations.
But this time, let's approach a migration from the perspective of an application owner. So a big thing in Crane 2.0 is that you will not have to have cluster admin access to use it. The next piece of this is that when we're doing that migration, we want to help end users embrace best practices. Essentially, we want to help people have a repeatable deployment after they use Crane. We're not just doing it one time and it's done; we want to help them so that setups and future migrations are less difficult, and we want to do that through creating an onboarding experience for GitOps. You can picture that you have an application running on the cluster, and maybe you're not quite sure how that application came to be; you don't know the pieces of it. You can point Crane 2.0 at that application, and we'll discover and export all the resources. Then we'll massage and make those small little tweaks that are sometimes needed so it can be repeatable and you can reuse it. That can be leveraged with Argo CD later to create a GitOps pipeline.

Now, as we're building this, we're building it through a series of small composable tools. We're doing that so we can give more flexibility and visibility into what's happening. And while we're doing that, we're also implementing a pluggable transformation mechanism. This is pretty cool. It gives us the ability to handle the small little edge cases that we've learned over the years are needed when we take an export of a resource and try to reuse it. From this, we can build up a body of knowledge in the space, even about doing migrations between different Kubernetes distributions or different vendors. So say you're going from EKS to AKS: maybe there's a small little tweak that you have to do, and we'll have different plugins that can help out with that. Then the last piece I want to mention is that we are approaching PV migrations a little bit differently.
We're planning to collaborate with the upstream project Scribe and put some of the features we've had for DVM back into that. This way, everybody can standardize on that one thing for PV migration going forward. And go next.

So this is just a summary of where we are. We're going to have two releases coming out over the next few months on the Crane 1.x side, and while we're doing those, there's going to be heavy development going on for Crane 2.0. For Crane 2.0, we're looking to have a release probably early next year. I just wanted to mention that for Crane 2.0, there's really heavy development from all of us right now. If there's anyone in the community that has new use cases that you think would be useful, please let us know. Same thing if anybody wants to collaborate; there's a lot of different development potential here. It's kind of a fun thing that we're looking into. Thank you, Marcus.

And thank you for presenting. That's pretty exciting, and thanks for sharing the roadmap here. Looking forward to 2.0. So, for Tackle, well, just a little explanation first. Tackle, as I said, is many tools growing together, and the Tackle team will talk to that. If you've been following the migration space, application migration specifically, for a while: there's Pathfinder, there's been Windup, now we see Diva, and many more features growing together. One of the features: if you've ever migrated applications, and you or your customer are looking at migrating and assessing a huge number of applications, well, you end up doing things like super advanced, super sophisticated spreadsheets. So, no more spreadsheets: we're getting the application inventory, and that will be the entry point for a number of features. Pretty cool, pretty slick, and I'll just let the Tackle team talk to that.

So, hello, everyone. Allow me to introduce myself. My name is Ramon Roman Nissen.
I'm the product manager for the Migration Toolkit for Applications, which is going to become the downstream distribution of the Tackle project. So, first of all, let's talk about the Tackle MVP. I think we have great news here today: we basically have our feature freeze in place, so now we have to do some last QE just to make sure everything works perfectly. And I think, if nothing especially harmful arises during this last cycle, we will be releasing the first version of the Tackle operator next week, probably on Wednesday. With this MVP, we will be including the application inventory that Marcus mentioned, and also the Pathfinder application, which is an assessment application that has been around for a while and was built by Red Hat Consulting. Basically, what we have done is rebuild it from scratch with a consistent UI and have it fully integrated with the application inventory. We will also be releasing Tackle Controls, which is basically, let's say, a backend microservice to glue it all together. So that's it for the Tackle MVP.

Also, we have mentioned Windup, which is going to become a first-class citizen within the Tackle universe. We will be contributing Windup to Tackle, and while we do that, we're going to do a few upgrades. First of all, by the end of next month, I think, we will be releasing a new version of the Migration Toolkit for Applications, including some rules for EAP 7.4, the new version of JBoss EAP, and we will also be adding a new IntelliJ IDEA plugin. We had plugins for Eclipse flavors and VS Code flavors; now we are adding another plugin for IntelliJ. And, well, since we have a pretty wide landscape of, let's say, projects and applications and tools within the Tackle umbrella, we are going to create a generic integration layer in Windup to have all these tools integrated within the Windup analysis flow.
That's something we're going to focus on once we deliver this first version of Tackle. And once we have created this integration layer, the idea is to have an integration with the application inventory, to have a seamless user experience and be able to analyze applications straight from the application inventory. To make the experience, let's say, more cloud native, now that we are delivering this as an operator deployed on OpenShift or on any Kubernetes distribution out there, we want to add some new sources of data to Windup: make it capable of gathering application source code or binaries right from Git repositories or corporate artifact repositories, instead of having to feed them manually to the tool. So that's one thing. And last but not least, we want to explore integration with other migration tooling and technology, so we will be exploring things like OpenRewrite, Eclipse Transformer, or Rebappi. So, with that, I don't think I need to introduce Gene Xiao from IBM Research. Hi, Gene.

Hey, everybody. Hi, everyone. All right. Let me give a quick introduction of myself. I'm Gene Xiao, a researcher and manager at IBM Research. Over the past two years or so, IBM Research has been actively engaging with our enterprise clients to understand what makes application modernization very difficult. Some of you may have seen some of the products we released from IBM recently. We're bringing our experience in applying AI to application analysis and enterprise application modernization into the Tackle domain. As a first volley of capabilities, we want to help developers and architects address the foremost immediate concerns we have observed in our experience. This leads to four really interesting assets we want to talk to you about. The first work thread is all about understanding the application from a containerization and technology compatibility and feasibility point of view.
Very often, when you have a set of applications you're trying to modernize, the immediate question that gets asked is: am I able to containerize the application, or part of the application? What do I need to consider in terms of the services, the platforms, the data access, and the resources I'm using? And what's my stance: am I able to containerize those or not? The containerization assessment is built all around answering these questions. We have established a very large knowledge base, built by IBM experts working with clients in the field, that understands the interaction relations of technologies. And then we taught our AI components and AI models how to think and reason about those same technology interactions. As an outcome, we're providing a service that will directly engage with App Inventory, as you heard before, to provide that assessment, at very large scale; by large, we mean we have tested on thousands of applications in one shot. It provides that intelligence in terms of: these are the technology breakdowns, this is how they interact, this is why they may or may not be containerized, and if you want to containerize, how you might be able to break them down and what sort of different dispositions, in terms of replatform, rehost, rewrite, or just a straight move, you can apply to different parts of the application deployment. So this is absolutely a very exciting feature. In addition, we're looking forward to working with the new tag concept that's coming out of the application inventory, which will allow you to manage, organize, analyze, and understand your applications in different ways, providing combination filters on tags that are either provided automatically by the containerization assessment or annotated and created by you, the user.
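The assessment idea described above, checking an application's technology stack against a knowledge base of containerization blockers and suggesting a disposition, can be caricatured in a few lines. The knowledge-base entries and disposition rules below are invented for illustration only; the real service uses a large expert-built knowledge base plus AI models:

```python
# Invented knowledge base: technology -> suggested disposition when present.
BLOCKERS = {
    "hardware-dongle": "rehost",     # cannot containerize; keep it on a VM
    "os-kernel-module": "rehost",
    "2pc-transactions": "rewrite",   # needs refactoring before containerizing
}

def assess(app):
    """Toy containerization assessment for one application."""
    hits = [tech for tech in app["stack"] if tech in BLOCKERS]
    if not hits:
        return {"app": app["name"], "disposition": "containerize", "blockers": []}
    # Choose the most conservative disposition among the blockers found.
    suggested = {BLOCKERS[tech] for tech in hits}
    disposition = "rehost" if "rehost" in suggested else "rewrite"
    return {"app": app["name"], "disposition": disposition, "blockers": hits}

apps = [
    {"name": "orders", "stack": ["spring-boot", "postgresql"]},
    {"name": "billing", "stack": ["jee", "2pc-transactions"]},
]
results = [assess(a) for a in apps]
```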
Now, what the second work thread deals with is, and many of you may know this: whether you're trying to modernize your application or move your application, a foremost concern and question that comes up is, I need to be able to understand what configuration and what resource dependencies there are, and how they're expressed in my application. In particular, we're focusing today on the Java generation of technology, namely JEE. Now, JEE has evolved over the past two decades or more, and because of that, there are tons of frameworks out there. Depending on the frameworks and services that are used, where configuration can be expressed, and where it sits in your code and portfolio, varies tremendously, as we have found out. In the past, we have done a lot of work creating rules and toolings and templates and models, which some of you may be familiar with, to map from particular artifacts, be they in the code, in your configuration, in your deployment descriptors, or in Java, discover them, and then hopefully map them into a set of configurations you'll be able to rewrite into a new target deployment environment. Here we're applying AI to look at how we might holistically mine and discover this information at a speed and a comprehensiveness that's difficult for us to cover manually. So the automated configuration discovery tool looks across the entire application code set, meaning the code, the configuration, and the deployment descriptors, and then, based on the framework types, tries to actually figure out and discover all these configurations for you. That both helps speed up your configuration discovery and serves as a validation of what you may have done manually. Next please. Thank you.

On the heels of that, another question that's very pertinent to even thinking about modernization is: what about my data dependencies?
How are my application and my classes interacting with the databases and related objects? This may seem somewhat manageable when the application size is a handful of classes, in the range of tens and maybe a hundred, but when you're dealing with applications that have literally over a thousand classes or more, which is unsurprisingly pretty common for large enterprise applications, this sort of interaction, as you can imagine, becomes fairly complex, especially if we have to understand the transactional properties of the data access. Transactions are one of the foremost concerns in data migration, because when you move from a monolithic application into a microservice-deployed architecture, you typically want to either be aware of those transactions or transform them into something a little bit more distributed. Essentially, the idea is you want to move from ACID to BASE, and understanding all these transactions today is actually tremendously difficult. So here we're applying a combination of traditional program analysis and AI to discover explicit expressions of database access and data tie-ins, i.e., objects mapped to your database tables, in addition to looking at the implicit relationships expressed through data transactions and data dependencies. And, as before, we look to provide you all this information seamlessly through the Windup integrations, so you have a common place where all these different aspects of your data are manifested.

Finally, we're also looking very actively at test-driven modernization. One thing we have discovered during the modernization journey is that, depending on the age and the size of an enterprise application, very often the test cases themselves are either out of shape or have reduced coverage, much less than an enterprise developer would like.
Now, it becomes a daunting challenge, especially with a legacy application, to actually increase the test coverage and then find ways to translate these tests into post-modernization tests. If you did any code change or code refactoring, you want to be able to do essentially differential testing on the before and after, and today that's largely manually driven and has been fairly difficult and time consuming. So under the test-driven modernization effort, as a first step, which we actually just released into the community, we have a set of supporting capabilities that allow automated test case generation at the unit test level, and provide differential testing for the before and after, from the monolith to the refactored microservices. Coming towards the end of this year, we're actively working on a set of UI-driven test cases as well. That would not only allow you to create UI-based functional test cases automatically, but also apply AI to automatically annotate the nature of the business functionality involved in these UI-based test cases, by looking at the manifestation of content and DOMs on different UX pages and the journey through those UX pages. So we're very, very excited to roll out these capabilities as part of the Tackle solution, to really give you a ramp-up, an assistant, and a validation for the change work you have to do. All right, next. Thank you.

So I think I have to jump in here to talk about the plans that we have for Windup and the application inventory. Last May, we released a new version of Windup, including additional migration rules towards EAP XP 2; that was released under the MTA moniker. So, yeah, that's already available out there. About the application inventory and Pathfinder: like I told you, we expect to have them released next week, along with the first version of the Tackle operator.
This operator will be available on the upstream OperatorHub.io, to be consumed by any Kubernetes distribution out there, but it will also be available on the OpenShift flavor of OperatorHub at the same time. So that's it for the upcoming plans this month. For Q3, we have this challenge of adapting Windup to the new situation and landscape that we have, like I told you before, and we will be adding some rules to support the migration path towards EAP 7.4, and also this IntelliJ extension. Gene already talked about the different IBM tools they are contributing to the Tackle project umbrella. The generic integration layer will be targeted at first towards the containerization assessment and configuration discovery tools and Diva, and will also work as an integration point between the application inventory and Windup itself. So this is some joint work we will be doing with all the IBM teams, just to design something generic enough that meets everyone's expectations, and that's a very, very cool thing. As I told you before, this Git repository integration will be worked on during Q3, and we will do this migration tooling exploration: we've already had some contacts with the OpenRewrite team, so we're really looking forward to digging deeper into this technology and seeing how this whole thing could fit within the Windup analysis cycle, to not only be able to identify what needs to be changed, but to perform these changes automatically as well. That's a very cool thing. And finally, by the end of the year, we expect to have this full integration, this seamless user experience, between the application inventory and Windup. We're also already working, among other things, with some GSIs that are showing their interest in Tackle. So, lots of very exciting features coming up. Back to you, Gene. Next slide, please.

Thank you, Ramon. Yes, let's have a look at what's on track in terms of Tackle capabilities.
So as of June, right now, as you can see on the roadmap, we have an initial rollout of the containerization assessment. We have onboarded and created a knowledge graph specific to enterprise containerization, directly relating to OpenShift, that will allow you to do OpenShift migration assessment. And in fact, we now have an open set of capabilities on the containerization assessment. And for ACD, we have already done the feature set around looking at Spring Boot and how that can be mapped to Quarkus configuration with Windup rules. And for DiVA, the initial set of database access and transaction discovery capabilities has also been released under the DiVA branch of Tackle. And finally, the test-driven differential testing, in terms of unit tests, is to be released, I believe, next week. So we encourage and welcome other communities to try out these new capabilities. And for all of these, even today, ahead of the Windup integration, we also provide ways for you to run them standalone, so you can give us early beta feedback that will help us build and prioritize going down the line, and we look forward to involving you. Now, going a bit forward into Q3 this year: from a containerization assessment point of view, the integration work is primarily the carry-through until the end of the year. We're looking at integration with the inventory, we're looking at providing better tag exchange and tag creation, and that leads into the disposition recommendations, basically the different dispositions across the different application components alluded to earlier. Those will be provided through an API, so you'll be able to access, manipulate, and work with them directly from the inventory. Now on the ACD side, our job is essentially to create more models and more frameworks that will allow us to discover additional frameworks.
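The Spring Boot to Quarkus configuration mapping mentioned above can be pictured with a toy sketch like the following (the mapping table is a small illustrative subset I'm assuming for the example, not the actual rule set shipped with Windup):

```python
# Toy sketch: translate a few well-known Spring Boot properties to their
# Quarkus equivalents. The real rules cover far more cases; this subset
# is only illustrative.

SPRING_TO_QUARKUS = {
    "server.port": "quarkus.http.port",
    "spring.datasource.url": "quarkus.datasource.jdbc.url",
    "spring.datasource.username": "quarkus.datasource.username",
    "spring.datasource.password": "quarkus.datasource.password",
}

def translate_properties(spring_props):
    """Return translated properties plus a list of keys needing manual review."""
    translated, unmapped = {}, []
    for key, value in spring_props.items():
        if key in SPRING_TO_QUARKUS:
            translated[SPRING_TO_QUARKUS[key]] = value
        else:
            unmapped.append(key)  # flag for a human (or another rule) to look at
    return translated, unmapped

props = {"server.port": "8080", "spring.datasource.url": "jdbc:postgresql://db/app"}
quarkus_props, todo = translate_properties(props)
print(quarkus_props)
```

The interesting part in practice is the "unmapped" bucket: anything a rule can't translate mechanically is exactly what the assessment surfaces for manual effort.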
So our target list essentially goes down the priority of what we have seen as the most popular Java frameworks. With community input, of course, that's just a prioritization we can work with; essentially, we always try to cater to what the community and our clients ask for first. So the general target is that we'll try to address four Java frameworks by the end of the year. On the DiVA front, the majority of the initial features are released, so for the rest of the year we're working on the integrations we want, and with the inventory. In addition, we're trying to shore up any issues and performance optimizations, as well as additional edge cases we may not have considered in the current DiVA release. So from a feature release point of view, we're more or less set for the year, but of course there are a lot of additional issues and content we have to deal with. Now, on the test-driven automation side, we basically have the unit test generation released, and we're providing the differential testing capabilities. And we're currently working on the UX test generation, as I mentioned before. Because this is still in the research phase at this point, we're hoping we will have a sufficient outcome somewhere between the third quarter and early fourth quarter this year, such that we can push that out. And at the same time, we will be working on integration with Windup and the inventory. That's the roadmap, set so that you can access all these capabilities in the future from a single dashboard rather than going through each individual toolset. All right, thank you. Back to you. Thanks. Thanks, Jean. Thanks, Ramon, for walking us through this. I'm just blown away. This is really super exciting. I can't wait to get my hands dirty on all the tools. So thanks for all the work you're putting in there, and that goes to all the teams. This is just a few words.
This is a presentation of what's coming; you wouldn't believe how hard all these teams and communities are working to get this out. So thanks on behalf of all users of all the tools. Thank you. So, let's move on to Move2Kube. When you have already containerized, or container-ready, let's phrase it like that, applications, for example on Cloud Foundry or Docker Swarm, or probably Spring Boot as a language platform. A couple of features that I'm pretty excited about are the custom templates, so the opportunity to customize our migration efforts on that level. So let's have the Move2Kube team talk to that. Thank you, Marcus. I'll just give a bit of a preview and then hand over to Ashok to go through a lot of the detail. Again, this is Amit; we're part of the IBM research group. And we started off on Move2Kube a couple of years ago because we saw teams in IBM and IBM clients saying, okay, we want to move to Kubernetes, but we never planned for that as part of our application development staffing and project plans. And what they realized is it's a months-long exercise for a single enterprise app. You need deep skills in Kubernetes and deep skills in your source platform, whether it's Cloud Foundry or a Java enterprise stack, whatever it is, and you need deep skills in the application itself. So really the question here was: how do I reduce the skill requirement, how do I reduce the cost of migration, bring those months down to a couple of weeks or less? And that's what got us going. And then we realized very soon that this is just widely useful, and so we contributed it into the Konveyor community, really looking forward to a community coming around this. As Marcus said, we really started off by looking at container-ready source platforms like Docker Swarm and Cloud Foundry.
What we are realizing is a lot of enterprises out there are looking to take some apps that are maybe not completely container-ready, but still want a lot of the work done to create their Kubernetes deployment artifacts; maybe you have to do some additional work to tweak your code. A lot of the extensions in Move2Kube that Ashok is going to talk about, around Windows support for instance, are also addressing those additional types of applications. Another big one that's come up is: how do I have a standard way of doing this if I'm going to migrate a thousand applications? I don't want every project team to figure out how to use Move2Kube and what my enterprise standards are; I want to set a standard workflow, I want to create templates that everyone can use. And then that further simplifies the job for a factory model. So a lot of the customization capabilities and the plugin capabilities allow for that, where as a migration architect I can set those standards and then roll out a factory of migration to my organization. I'm super excited. And with that, Ashok, over to you. You want to walk through this? Thanks for the good preview. As Amit was mentioning, we have been working with a lot of teams who have been trying to get to Kubernetes, and all of this roadmap is based on the experience that we gathered from you. So do feel free to give your inputs back to the community so that we can adapt the plan to meet what the community really requires. Looking at the dates: we had capabilities around Spring Boot and Windows containers, which we saw a lot of people requesting, available as plugins to Move2Kube. So Move2Kube, in addition to its core capabilities where it can create all the component artifacts, allows for plugins so that you can put in your customized agents. If you have a new language you need to support, you need not go all the way to understanding the internals; you just need a plugin and you can do that.
Right now we have support for Spring Boot applications to be containerized and deployed to Kubernetes using plugins. We also have support for Windows containers, which includes the .NET 4.x framework, and also .NET Core and .NET 5, which can even run as Linux containers. So Move2Kube is capable of understanding what needs to be containerized and which way it runs best in Kubernetes, taking your enterprise requirements into consideration. Going further: in June we had a brand new release of Move2Kube, Move2Kube 2, which has already taken place, and we have started working on Move2Kube 3, the next version. And if you go to our main branch in the community, you will notice a brand new framework for Move2Kube which allows for a lot of customization. Let's say you have a template that you want for your organization; you want a BuildConfig in a specific way, or a DeploymentConfig that has some specific parameters. Move2Kube allows for that. In addition to that, we also realized that every enterprise has its own folder structures. Move2Kube used to create a specific structure where the source code goes in one folder, the additional Dockerfiles go into different folders, and the deployment configs go somewhere else. But as we worked with many enterprises, we realized each one has their own standards, and how do you fit that? That is where Move2Kube's new framework comes in. It can help you create a template, plugin, or transformer for your application and your enterprise, and you can give it to your thousands of teams who are doing the migration, and it will be much more automatic than before. In addition to that, we are also making more things customizable. Helm charts used to have a fixed set of values which would be parameterized as standard, but now new capabilities are coming in, which will be there in a couple of days or so.
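To illustrate the parameterization idea being described, here is a rough sketch of turning selected fields of a manifest into Helm-style placeholders plus a values dictionary (this is my own simplification of the concept, not Move2Kube's actual implementation, and the manifest shape is deliberately minimal):

```python
# Rough sketch: replace chosen fields in a manifest dict with Helm-style
# placeholders and collect their original values into a values dict, which
# mimics the "parameterize these values into a Helm chart" idea above.
import copy

def parameterize(manifest, paths):
    """paths maps a dotted path in the manifest to a values-file key."""
    templated = copy.deepcopy(manifest)
    values = {}
    for dotted, value_key in paths.items():
        node = templated
        *parents, leaf = dotted.split(".")
        for part in parents:
            node = node[part]
        values[value_key] = node[leaf]               # remember the original value
        node[leaf] = "{{ .Values.%s }}" % value_key  # drop in the placeholder
    return templated, values

deployment = {"spec": {"replicas": 2, "template": {"spec": {"image": "myapp:1.0"}}}}
tmpl, values = parameterize(
    deployment,
    {"spec.replicas": "replicaCount", "spec.template.spec.image": "image"},
)
print(values)  # {'replicaCount': 2, 'image': 'myapp:1.0'}
```

The output pair corresponds to a chart's `templates/` file and its `values.yaml`: the same walk, applied to whichever fields you choose, is what lets any field become a chart value.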
We are excited about this because it allows you to create any kind of Helm chart that you want. You have a definition; you say, okay, parameterize this in the deployment, parameterize these values, and expose them as Helm chart values. It will create a Helm chart for you, create the customized YAMLs with the same specification, and even OpenShift templates. So we are really excited about seeing that in the community in the near future. Not only that: in addition to Move2Kube helping you create these artifacts, we are also making them best suited to your requirements, and that is the key to adoption and rapid migration. Looking further into Q3, we are looking at getting the Spring Boot and Windows containers capabilities built into Move2Kube, so that you need not even use plugins and Move2Kube will automatically understand them. And one other interesting feature that is coming up is Netflix OSS support, where we are looking at: okay, you have a Spring Boot application, it has these Zuul capabilities, Eureka and so on; when you go to Kubernetes, how do you map it? How do you map it to the right Ingress configuration, the right Service configuration? Move2Kube, with this new pluggable framework, brings in these additional capabilities which allow you to take each of these artifacts and customize it the way that you want. For example, you take your Zuul configuration: how do you map it to the right Ingress? You take your application which is now talking to Eureka: how do we change it and make it talk to the Kubernetes Services? A lot of exciting things over there. This will really speed up how you can take a Spring Boot application which is deployed on a third-party or any other platform and get it into Kubernetes rapidly. Not only that, we are also looking at supporting more and more platforms where you can run Move2Kube.
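As a rough picture of the Zuul-to-Ingress mapping idea, a sketch like this shows the shape of the transformation (the route data and naming are hypothetical, and real Zuul route semantics have many more knobs, strip-prefix behavior, filters, retries, than this covers):

```python
# Sketch: map simplified Zuul-style routes (path prefix -> service) onto a
# Kubernetes Ingress manifest. Hypothetical data for illustration only.

def zuul_routes_to_ingress(name, routes):
    paths = [
        {
            "path": path.rstrip("*"),  # Zuul's "/orders/**" becomes an "/orders/" prefix
            "pathType": "Prefix",
            "backend": {"service": {"name": svc, "port": {"number": 80}}},
        }
        for path, svc in routes.items()
    ]
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "Ingress",
        "metadata": {"name": name},
        "spec": {"rules": [{"http": {"paths": paths}}]},
    }

ingress = zuul_routes_to_ingress("gateway", {"/orders/**": "orders", "/users/**": "users"})
print(ingress["spec"]["rules"][0]["http"]["paths"][0]["path"])  # /orders/
```

The Eureka side works the other way around: service discovery by logical name maps onto Kubernetes Services and DNS, so client-side lookup code can often be replaced by a plain service hostname.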
For example, we have an entire new framework for how these plugin systems work, so that you can just replace one component of it. And we are going to container-based transformers, which allow you to use any language to write a plugin. You want to write your plugin in Python, you want to write it in Golang, you want to write it as a shell script: that does not matter. Move2Kube will be able to consume all of that; you just replace one of its components and everything else flows for you. More exciting are the operators that we are looking at. We are creating operators for Move2Kube which we will be putting in OperatorHub. And also, as Amit was mentioning, we are looking at the factory model: for your enterprise, how can Move2Kube help you create the right templates automatically? Right now, you can do it manually. How do you help your steering team create the first template that can then be used by the migration teams? Marcus, if you can go to the next slide. Yeah. And getting into the priorities: as far as the platform priorities, as we talked about, Spring Boot and Netflix OSS are some of the top priorities that we are looking at, and we are trying to bring them natively into Move2Kube in addition to the plugins that we have. We have support for Windows containers coming up. As far as user scenarios, we have user customization, as much customization as possible. And as far as the deployment flows are concerned, we are looking at operators so that you can get your application deployed; right now we use Helm charts, and we are also looking at using operators so that the flow is much more streamlined in Kubernetes. Not only that: we do support Windows using WSL now, and we are also looking at allowing native support for Windows, where the Move2Kube binary runs natively on your Windows platform. Those are all the updates from the Move2Kube side. Amit, anything else you would like to add? No, I think you covered it really well.
Okay, perfect. Thanks for sharing that. Well, interesting times: so many cool tools, so many cool features. So, pretty excited. So before we go to the Q&A, and I'm not sure, Chris, I haven't been following the chat, the thing is, as always, in the best open source tradition: if you want to shape the future, just join the community and contribute. And contributions can be as simple as fixing a typo in the documentation, or sharing your experience with a tool and providing feedback, because that is invaluable. So please join the team, join the community, if you can spare a few minutes of your lifetime contributing. So with that being said, that's about it from the community. Over to you, Chris, or whoever's been watching the various chat channels. Thank you. Wonderful presentation, wonderful roadmap. I'm excited about Konveyor, I really am, to be honest with you. I'm not just saying that because y'all are on or anything. I think it's going to give us a lot of great capabilities. Folks, if you have any questions: I mean, we've done a good job of answering the questions via chat. So if there are any other questions you want to get answered live, please ask them now. We have about three and a half minutes left on the stream, so we can definitely fill that with questions if you've got them. But otherwise, I think we're good as far as questions from chat. But if anybody wanted to talk about the kind of migration process with any of these projects, feel free: common scenarios, use cases, etc. So I might speak up, because that's something customers typically ask me: okay, so we have a lot of good technology, cool technology, but actually how do I put it to work? What do I have to do, etc.? Summing that up in a few minutes is a little bit of a challenge, so that would be one thing we should probably have as a Konveyor meetup.
And there have already been some meetups touching these items. But basically, what I suggest to customers, in two minutes, is the elevator pitch, the high-rise version. So what do I do? First, I need to understand the real estate, no matter what kind of migration it is. I need to get a good understanding of my application or infrastructure, in terms of the VMs and the infrastructure landscape. Then obviously make the right decisions, then do a pilot, a proof of concept if you will, and then start with a factory approach. This is also what the guys mentioned: what we're typically doing in the application modernization space is proving that the principle is valid and then starting a factory approach, because we're not talking about one or two artifacts, be it an application or a VM or whatever; we're talking about hundreds and thousands, so that needs to be sped up. So, well, that was a bit like one minute of stating the obvious. But that's basically it: there's no new rocket science, there's no pixie dust that you spray over your infrastructure so that everything happens automatically. There are some things that need to be done, and I think, since the question comes up, we should probably cover that in more detail in a Konveyor meetup. Sounds great. One last question, if you have time; I'm willing to give up a few minutes here. Any exciting Forklift or KubeVirt successes to share? Like, I know there's a lot of testing that happens, but has anyone taken the tools and made a big move of any type? Thanks for the question. We have just launched the tool, okay, but here's what we've seen so far. It's been a year since KubeVirt reached GA in OpenShift Virtualization, I mean, this is the downstream for KubeVirt, and we have already helped a lot of customers at Red Hat, and we have managed to migrate several hundreds of VMs, but it was a migration done...
Let's say manually, you know, like backup and redeploy, backup and redeploy, backup and redeploy. So this showed that we needed this tool, and it came from our users telling us: look, we need something to streamline these VM migrations. So far, we already have people using KubeVirt on OpenShift in production, and we already have some interesting cases. For example, one aeronautical company has a simulator that is pretty old but still produces very interesting data, and nobody knows where the code is or anything like that. So they took the VM, as is, into OpenShift, they coupled it with Tekton pipelines so they could spawn as many VMs with that simulator as they want, and they added Kubeflow to the mix, so they could generate a data set to feed machine learning and AI analytics. It was a very successful example of how you can leverage the power of Kubernetes around VMs. So we expect more of this to happen in the future. When it comes to the work that we've done in Red Hat, we have started an early access program for customers, and we expect to have more successes in the future. Well, crossing my fingers. That's awesome. That's a fantastic use case and success story; that's amazing. So thank you for sharing, Miguel, appreciate it. Marcus, thank you for leading this effort and kind of pushing through the slides for us, and thank you everyone for tuning in. Any last thoughts, anybody, before we sign off here? No? Okay. Just a big thank you for having us, Chris. Yeah, it's my pleasure. It really is my pleasure. And thank you so much, everybody, for tuning in. In about an hour we will be starting our OpenShift Commons end user stories; it's going to be kind of a marathon, Mystery Science Theater 3000-style watching of various talks, and Diane Mueller and I will be commenting and interjecting here and there. So it's kind of an experiment for us on the channel, but it should also be kind of fun.
So please tune in for that. And if this is the last time I see you today, stay safe and have a good weekend out there everybody. Thank you.