Good morning, good afternoon, good evening, wherever you're hailing from. Welcome to another, the last actually, KubeCon EU session office hours for us here on the Red Hat team. We're talking to our Konveyor team, right? The Move2Kube team. Yes, this is the Konveyor talk that everybody has been waiting for, and I've been anxiously waiting for. So I'm Chris Short, and I'm going to hand it over to Josh Berkus from the open source program office. Josh, take it away. Thanks, I'm Josh Berkus. I'm Red Hat's community person for everything cloud native, and in particular that includes helping a lot of people move their applications onto Kubernetes. If you are in operations, I am sure you have at least one application that is still on some other platform that you would like to move. And to help you do that, we have here the team behind Move2Kube, a project for enabling application migration to Kubernetes. Amit, do you want to introduce the team? Sure, Josh. Hi everybody, I'm Amit Singhee from IBM Research, and my team here: Ashok, Nabha, Akash, Pablo, and Hari. All of us have been working on this exciting new project, Move2Kube. Actually, it's not new for us, but it's new in the community, and we're looking forward to talking about that. So let's get going. Ashok, can you share the slide? Okay, so I think everyone's familiar with Kubernetes, I'm sure. So what we're going to talk about is: if you're not on Kubernetes today in your organization, how can this tool called Move2Kube help you get there with less pain and in less time? And Ashok, let's move forward to the next slide, please. So Move2Kube is part of this open community called Konveyor that we kicked off with a few different projects contributed initially by Red Hat and by IBM Research last year. You can find it online at konveyor.io. It's essentially a community of people who are really passionate about Kubernetes and helping others modernize and migrate their apps to the hybrid cloud leveraging Kubernetes.
And so it's also a collection of tools and best practices on how you can rehost your applications over to Kubernetes, how you can replatform, and how you can refactor them, even re-architect them, so that they run really well on Kubernetes. So moving to the next slide, Ashok, please. This is just a quick view of what's in Konveyor. Today we have five different projects there. Move2Kube is one of them; we're going to talk about that. It fits into the replatform use case: if you have an existing application infrastructure and you want to move it over to Kubernetes, but you're not really looking to redesign the code or the implementation, that's what Move2Kube lets you do. On the rehost side, we have Crane, which helps you migrate your applications between Kubernetes clusters. It migrates over the state, so it's like a live migration. And then we have Forklift, which will let you take your VMs and move them over to KubeVirt, so that you've got your VMs running on Kubernetes. And then there's Tackle on the right-hand side, which actually does detailed source code analysis to analyze your applications and help you move them closer to containerization. And finally Pelorus, which helps your organization measure the impact of changes you're making in your software delivery, so that you know you're actually moving in the right direction and improving the metrics around software delivery performance, right? So a variety of different tools. With that, I'm going to hand it over to Ashok to take us through Move2Kube. Ashok, over to you. Hey, thanks Amit, and thanks team. Thanks to everyone for joining in. So let's have a quick view of what Move2Kube is, so that you have a preview of it and can ask your questions about it. So when you are writing your application, or you have been hosting your application in production, it might be on any of these platforms, right?
You might already be on Docker Swarm, or you might have already containerized your application, or you might have your application running in Cloud Foundry, or you might have a J2EE application, or it might just be running in VMs, right? Move2Kube is about helping you automate the journey of getting to the Kubernetes platform in the most cloud-native way. By the most cloud-native way, what we mean is: what is the most natural way if you are developing an application from scratch for cloud native? You create your Dockerfiles or cloud-native buildpacks and containerize your application. You create your deployment YAMLs and all of that. So Move2Kube takes all the artifacts that you currently have and tries to automate the process of creating all the artifacts that you see on the right-hand side. It might be your Kubernetes YAMLs; the containerization scripts like Dockerfiles, cloud-native buildpacks, or S2I images; Helm charts; Kustomize YAMLs; or OpenShift templates. It can even create an operator or Knative artifacts if you're interested in them. And also, if you are putting your application in production, you might be interested in CI pipelines like Tekton. So it will create all those artifacts for you, bootstrapped, and you are almost ready to deploy in a matter of minutes. At a high level, what Move2Kube does is help you discover all your artifacts, containerize them, translate and create the right destination artifacts, and customize them for your particular deployment. We will see a quick demo of it in a minute. Move2Kube is completely open source; you can just head over to the project and look at everything there. It's a command line tool, but we have a web interface that wraps the command line tool and all its functionality. Let's quickly look at the web interface. So this is the Move2Kube website, to get your web interface going.
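The web UI ships as a container image, so the command being copied looks roughly like this. The image name, tag, and port below are assumptions based on the Konveyor repositories; copy the current command from the site itself rather than this sketch:

```shell
# Illustrative only -- run the Move2Kube UI container, mounting the current
# (empty) folder as its workspace, then browse to http://localhost:8080.
docker run --rm -it -p 8080:8080 \
    -v "$PWD:/workspace" quay.io/konveyor/move2kube-ui:latest
```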
All you need to do is go here, create an empty folder, and copy this command over here. So let me just quickly do that for you, and Move2Kube will be up and running in a matter of seconds. Okay, now Move2Kube is up and running. Let's head over to the website. Okay, so here is the Move2Kube interface. Let's create a new application. Since KubeCon is going on, let's call it KubeCon demo. And what I'm going to do now is take some source artifacts from the platforms that we saw. Let's take an application which is multi-component and has multiple languages — say, an application which has some Golang components, Java Gradle, Node.js, Java Maven, Python, and so on. Let's say your application is really polyglot and has all these languages. All I'm going to do is zip them all into a zip file, upload it, and let Move2Kube do the job for me. So what is Move2Kube doing now? It's going into each and every folder and every file in there, trying to analyze: is that a Cloud Foundry manifest? Is that some runtime information about Cloud Foundry? Is that a Dockerfile? Is that a Golang application? It tries to find all of that, and it tries to find any links — if a service is talking to something else, it tries to find those links — and then it comes up with a proposal: these are the applications I found in your folders, and this is how I think they can be ported to Kubernetes. So in this case, it's saying: okay, I found a service named golang, and it can be containerized in three ways — either with a Dockerfile, S2I images, or CNB — and it has more details on each. It's just a UI over the YAML that the command line tool generates; this is the YAML you will see if you're using the command line version. And then you have your Java applications and all of it listed over here. So what we will do now is go to the next step.
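For a sense of what that plan YAML looks like, here is a heavily simplified, illustrative sketch. The field names and values are assumptions for illustration; the real schema depends on the Move2Kube version:

```yaml
# Illustrative sketch only -- not real Move2Kube output.
apiVersion: move2kube.konveyor.io/v1alpha1
kind: Plan
metadata:
  name: kubecon-demo
spec:
  services:
    golang:
      - containerBuildType: NewDockerfile   # alternatives: S2I, CNB
        sourceDir: src/golang
```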
Let's create the artifacts based on the plan that it came up with. So I'm going to hit next, and then I'm going to tell it to containerize all my applications — these are the services that it found. And then, okay, I am interested in a Dockerfile, let's say. So I'm telling it to create a Dockerfile, and then it says: okay, I found a Java application which I can host either on JBoss or Liberty, or just in a plain Docker container. So let's choose the JBoss one in this case. And then it's asking: which platform do you really want to deploy to? Move2Kube understands the variants of Kubernetes flavors, right — the kinds, the versions, and so on — so it can really target the artifacts for the particular version you're targeting. It can even be your custom cluster: Move2Kube has a collect phase where you can gather information about your particular cluster and really target the artifacts for that version. And then you just click next, and it asks basically the questions it cannot find an answer to from the source. For example: for your application, which ingress host do you want to expose it on? So it asks those questions. Let's keep pressing next. Okay, and then it asks more specific questions about your destination. Like in this case, it's asking which image registry you want to push to. Let's say I will go with us.icr.io, which is the IBM Cloud registry. And then I will put my images into a demo namespace in that registry. And then I'm going to use an existing pull secret. And it's asking for the ingress URL; I'm just going to head to my cluster, copy the URL from there, and put it over here — and I'll copy this again in a minute and tell you why. The next thing it asks for is a TLS secret, which in the case of IBM Cloud is just the first part of your URL by default. So I'm just putting that in, and then what it's doing now is creating all the artifacts that you are interested in.
Let's see what it has created for us now. So I'll just close this for a minute, and you will see that all the artifacts that we need have been generated here. So I'm just going to download those artifacts, and let's see what they are. Okay, here is the set of artifacts it has generated; let's explore them. What it has done is basically take your source folder and create the Dockerfiles for your different components — in this case, a Dockerfile for the Golang component and for each of the other components. In addition to that, it has also created all the scripts and YAMLs required for your deployment: the Tekton pipelines; the Docker Compose file for your local testing; the Helm charts; the Knative services if you are interested in serverless; the Kustomize bases and overlays for your different deployments; the OpenShift templates; the Helm-based operator; and the plain YAMLs if you're just interested in deploying directly. And it has also created some helper scripts for you if you want to try it locally. So let's quickly try it locally. In the meanwhile, if there are any questions, I can definitely take them. So let's go to this folder. Yeah, so one of the questions — well, actually you're already showing some of this — one of the questions was about whether or not you can customize the artifacts that it creates, after it creates them. And you're showing that now. Absolutely. These are the artifacts. The way we look at Move2Kube is that it tries to get you to 95% of where you want to be, and it's a one-time process. Then you can edit the artifacts as much as you want and deploy them — that is one kind of customization, right, to cater to your needs. In addition to that, Move2Kube also allows you to customize using a scripting language.
We will get to that in a minute, okay? So let me quickly show what it does, and then I'll answer your question. Okay, so here there is a script which can be used to build the images. I've already built them so that it does not take time, so I'm not going to run it. And then there is a script for pushing the images. And once you do that, all you need to do is deploy the Helm chart. I already have my kubectl context set up, so I will just run the Helm deploy script, and it will deploy the Helm chart and give me the URLs where I can access the application in a minute. So while it is deploying the artifacts, Josh: on the ways we think of customizing Move2Kube for your needs — there are two ways we generally think of. One is scripting based. You have your default Move2Kube, and then there are Starlark scripts that you can use to say: okay, this YAML is fine, but there are some annotations that need to be added to it, or I need to change it to use this particular pull policy. So it allows you to use Starlark scripts to do that. Starlark is essentially the language that is used in Bazel. The second kind of customization Move2Kube allows is for when you have your project structured in a different way, right? Move2Kube by default understands a lot of languages and different folder structures, but you might have a specific way in which you lay out your projects, so we also have plugin-based customization. We will look at that in a minute too, okay? So here it has deployed the application. I can just head over here and see that the application is already hosted in the cluster, and I can access the link over here, okay? So that's a very quick demo of it. What we will look at next, to answer Josh's question, is how you customize the artifacts that are generated by Move2Kube. For that, I would like to invite Hari to give a quick demo. Hari? Hi. So can I share the screen? Yes, please.
Can you see the screen? Yes. Okay. So, hi everyone. I've prepared a small application. It's a website with a REST API. The website is using Java, and the REST API is using a Python server. So we have a Deployment for the website and one for the REST API, we have Services that expose those Deployments, and we have an Ingress so that we can reach them from outside the cluster. So somebody asked whether we can customize the artifacts. We provide a method for customizing using a scripting language called Starlark; it's very similar to Python. So here we have Starlark transformations — I'll just show them; we have two transformations here. One transformation is simply adding a new annotation — a common annotation — to all the Deployments. And another one is simply setting the replicas. These transformations can also interact with the user: here we are asking the user what the number of replicas should be for each service, and setting that value. And with the Starlark transformations — you might be familiar with tools like Kustomize and so on; those allow you to customize resources, but Starlark gives you even more control, because it has access to for loops, if conditions, more complex data structures, and so on. So here we are going through all the containers and setting some resource constraints. If your organization has some default limits, you can set those. So these two files are the transformations that I'm going to demo. I already have Move2Kube installed, so we'll just do a translate: with the -s flag we give the source YAMLs, and then with the transformations flag we give the transformations. So now it's asking the same questions as we saw in the previous demo, but fewer of them, because we're not doing a full translation — we are simply doing some transformations. So here are the questions that we configured: what should be the number of replicas for the service rest-api?
So we can give, say, two and three. And you can set defaults and hints and everything. So now it has done the translation, and we can look at the output — the output is in my project folder. Here we see all the same artifacts as before; we'll just look at the Kubernetes YAMLs that it generated. So what changes did it make? First of all, we see the common annotation that it added to the Deployments. We can see that it changed the replicas to the number that we specified. And it also set those resource constraints for each container in the Deployments. And it's more powerful than that, because sometimes you might have older versions of these YAMLs. If you look at the input Deployments, this one is extensions/v1beta1, which has been deprecated; the newer version is apps/v1 for Deployment. But if you look at the output, we've automatically converted it to the latest version — so these are apps/v1. And the same for the Ingress: the Ingress was extensions/v1beta1, and now it's networking.k8s.io/v1. And this is more than just a simple string change, because if you look at the older Ingress, it had a different format — a different syntax, especially here — and the newer version has a different syntax. So it is actually understanding the services and the paths and doing the correct conversion; it's not simply changing the apiVersion string. And yeah, that's basically what I wanted to show. Any other questions? Also, I see an interesting question on Windows containers. Yes, Move2Kube can handle Windows .NET Framework apps and so on, and we have an interesting demo of that too. We can look at it, Josh, whenever the time is right. Okay. Yeah, and actually, were we going to do that demo later? Yeah, we can do it now. Okay, cool. There's a follow-up question on that too, right? Obviously, because it's more complicated than just creating the Helm charts. Absolutely. Nabha, can I invite you to give a quick demo of the Windows platform translation? Sure, Ashok, thank you.
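The two transformations Hari described can be sketched roughly like this. Starlark is close enough to Python that the sketch below is valid in both; note that the entry-point name and the shape of the resource objects are assumptions for illustration, not the exact Move2Kube Starlark API:

```python
# Sketch of a Starlark-style transformation: add a common annotation to every
# Deployment and set its replica count. The function name and data shapes are
# illustrative assumptions, not the real Move2Kube interface.
def transform(resources, replicas=2):
    for r in resources:
        if r.get("kind") != "Deployment":
            continue  # leave Services, Ingresses, etc. untouched
        annotations = r.setdefault("metadata", {}).setdefault("annotations", {})
        annotations["example.com/migrated-by"] = "move2kube"
        r.setdefault("spec", {})["replicas"] = replicas
    return resources
```

Because a transform is just code, it can do things a patch-overlay tool cannot, such as looping over every container to apply organization-wide resource limits.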
Let me share my screen first. Is my screen visible? It's coming, hang on. Okay. There we go. Yeah, thank you. So before I start the demo of the Windows container support, I just want to give you a brief introduction. Windows container support in OpenShift started in the later part of last year, sometime around December. And one of the intentions was: there are existing Windows workloads which have been around for a while and are being used by enterprise applications — how could we find a path for these applications to be replatformed so that they can get the benefits of OpenShift or Kubernetes environments? So one approach is to use OpenShift Virtualization and lift-and-shift the workloads directly, putting them into VMs as they are — but then they won't be getting the benefits of the fully containerized build and orchestration mechanisms that OpenShift provides. The second approach is to provide a Windows container wrapper around the existing .NET Framework applications — especially legacy applications on frameworks such as .NET Framework 4.8 and so on. The third approach is to redesign these applications around .NET 5, which has emerged recently, use the Linux container wrapper, and then deploy them onto OpenShift. So these are some of the paths Windows workloads can take to run on OpenShift and similarly advanced platforms for container orchestration. Now, the demo that I'm going to give covers both of these types of frameworks: I'll be using a .NET 5 app, and also a Windows Communication Foundation (WCF) service app which makes use of .NET Framework 4.8. These apps will be accompanied by certain plugins, just as Ashok mentioned: without actually changing Move2Kube, we can insert plugins which understand these apps and generate the desired containerization that is required for them.
So these are the two plugins, one for .NET 5 and one for the Windows WCF app. And now let's use the UI to upload these apps to Move2Kube and see how it processes them. So let's create an app for the .NET replatforming demo, and let's upload our applications and the corresponding plugins to this app. The flow is very similar to what Ashok just showed: the applications and the corresponding plugins are processed by Move2Kube, which generates a plan and shows the various containerization strategies, as shown below. Once this is shown, you might notice that two apps are detected, one for .NET 5 and one for the WCF service, and the new-Dockerfile generation mechanism is recommended for each of these apps. Once this is done, we move on to the translate phase, to consume this plan and go through the translation process which was shown before. So we select both of these services — each of them has been detected separately by the plugins — and we choose the new-Dockerfile generation mechanism. We use Kubernetes as the target platform, select both services to be exposed, and the ingress paths and so on will be generated accordingly. Once this process is done, it generates the artifacts that we want, and we download them. And we have a downloaded version of this already, to save time. So as you can see, the downloaded replatformed target artifacts contain the Dockerfiles that are required to construct containers for each of these applications. For instance, for the WCF app, it's a two-stage Dockerfile that is generated: one stage to build the app, as can be seen from lines 15 to 19, and a second, run stage, where the built binary is run as part of the runtime container. And you can note the 4.8 framework version that has been automatically detected and used to find the appropriate base image.
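The shape of that generated two-stage Windows Dockerfile is roughly the following. This is an illustrative sketch, not Move2Kube's actual output: the solution and executable names are made up, though the Microsoft base images are real ones for .NET Framework 4.8:

```dockerfile
# Illustrative two-stage Windows Dockerfile for a .NET Framework 4.8 WCF app.
# Stage 1: build the solution with MSBuild on the SDK image.
FROM mcr.microsoft.com/dotnet/framework/sdk:4.8 AS build
WORKDIR /app
COPY . .
RUN nuget restore WcfService.sln && msbuild WcfService.sln /p:Configuration=Release

# Stage 2: run the built binaries on the smaller runtime image.
FROM mcr.microsoft.com/dotnet/framework/runtime:4.8
WORKDIR /app
COPY --from=build /app/bin/Release .
EXPOSE 8080
ENTRYPOINT ["WcfServiceHost.exe"]
```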
Similarly, for the .NET 5 app, we have a Dockerfile with one stage for generating the published artifacts of the .NET 5 ASP.NET app, and another for the runtime that runs those published artifacts. Now let's see how we can build this app using the Dockerfile that has been produced by Move2Kube. So we build it — and as you can see, the image has been successfully built. Now we run this image, and please note that this is a Windows container, not a Linux container, and it is running natively on a Windows bare-metal machine. So I'm going to run this service — a native WCF .NET Framework service — and I'll run the client to hit this service and get a response from another container. As you can see, the service container has been detected, a request has been sent, and the response has been obtained via the exposed ports. So this is about Windows containers and the support that Move2Kube provides to containerize Windows applications, let them use the Windows container facility, and eventually find their way onto the OpenShift platform. If you have any questions, please let me know. Cool. One of the follow-ups to the original question is that obviously somebody with a .NET app is going to be looking at migrating not just from non-Kubernetes to Kubernetes, but at the same time they may have to move the application from .NET Framework to .NET Core so that it'll run on Kubernetes. Yeah. So currently Move2Kube provides a replatforming solution. What is being suggested by the question is a refactoring solution, in the sense that you have a .NET Framework app which uses an older version of .NET — say 4.8 — and you want to convert that to .NET 5. That requires redesigning the application to consume the advanced features of .NET 5.
That is not what Move2Kube provides. But if there is a 4.8 version of the framework, or a 4.7 version, or similarly a .NET 5 version, Move2Kube detects the app's requirements and generates the containerization strategy accordingly. So that is the gap that Move2Kube tries to fill. Okay. We have another question, unrelated to Windows or .NET containers. I think this is probably more of a migration-advice thing, although I'm curious as to whether or not Move2Kube can help with it. We have an admin here whose coworkers do a lot of work in Jupyter Notebooks — you know, the data analytics platform — and the process of moving that infrastructure onto Kubernetes is complicated. Is there any part of that that Move2Kube can help with? Currently we are not handling Jupyter Notebooks; we have not tried it. But if there is an interesting use case, we can discuss it in the community. So I would ask them to come over to the Konveyor Slack channel in the Kubernetes workspace, and we can have a discussion around the exact use case and what they are trying to manage. Yeah. I mean, I'll say from my own personal experience that there are some major issues with running JupyterHub on Kubernetes that are limitations of JupyterHub itself — particularly around access control — which is not something that we're going to overcome in Konveyor. That's kind of up to the Jupyter team. There's a follow-up question on the .NET side: in order to move onto Kubernetes, do you have to re-architect as microservices, or can you just take the monolith and turn it into a monolithic pod service? I think it is possible to — yeah, please continue. Yeah, so basically it is about this, right: as you saw, each of the different tools in the Konveyor community has a particular scenario that we are trying to tackle.
The first thing that we are trying to do with replatforming is to take the monolith and run it as it is on the Kubernetes platform. That's what Move2Kube helps with. Once you are there, you can then refactor your application — break the big monolith into multiple components and deploy it as that. There are tools like Tackle, and a few more, which can help you with that as part of the Konveyor community. May I add one more point there? The refactoring to microservices tends to be driven a lot more by the needs of the business than by platform or technology considerations, because refactoring to microservices means additional work — not just the refactoring itself, but also how you then change your DevOps processes around the microservices. So in a sense it's an orthogonal point, although it often comes together with replatforming to Kubernetes, because microservices let you leverage the benefits of Kubernetes better. So it's not mandatory: if the business requires it, then it's good to consider, but if the business doesn't require it, it may actually be unnecessary overhead — provided your app is not so large that it won't run in a container on Kubernetes. So another question — again, not a .NET question, but a general migration and Move2Kube question out of Slack. Somebody has source code repos that have scripting for building VMs and deploying them, and they wanted to know whether there's any tooling in Move2Kube to help them convert that VM generation code into Dockerfile generation code. So, the way Move2Kube handles this: they have VM generation code, but there is some source code that at the end of the day is contributing to the VM — an application is being built, and then the binary is being put there. The way we do it in Move2Kube is that we target the source directly.
So let's say you have five or ten git repositories where you are taking the code from, compiling it, and putting it into the VM. You just point Move2Kube at the source code, and it will figure out how to containerize each one of the components and put it onto Kubernetes. So we look at the source artifacts rather than the VM build scripts; that's how we handle it. Okay, we'll see whether or not they come in there. Their actual use case, apparently, is VNFs — so telco VMs — and they're trying to move to CNFs, which sounds like the kind of use case that we might at some point have tooling to optimize for, given how much of it there is. Do we have that right now? Is that like a project that we have right now, VNF to CNF? We haven't tested that particular thing, but our colleagues in IBM Research are very much into VNF work, so... That almost feels like it might eventually be a separate Konveyor project that embeds Move2Kube, because it's kind of a special use case. It is. And in fact, that's a really interesting one. I think one of the challenges right now is that Kubernetes itself is not necessarily optimized for networking workloads like VNFs, because they need certain performance guarantees at the networking level — around jitter and latency and things like that. Yeah. So there's a lot of interest and work, but it's exploratory, because we're also waiting for the platform to catch up. And that's why I was thinking we would probably need a special tool set, right? Because when we're looking at VNFs or CNFs — I know this from working with those folks — you're not looking at just deploying regular pods and such. You're generally going to be using something like Kata Containers with Multus in order to have that whole network infrastructure in the deployed application. Yeah. Yeah, the other project that's kind of interesting there is Forklift, because it moves your VMs into KubeVirt. So you might consider that if you're not able to containerize some of these workloads.
Again, you still have the challenge that the virtualized network may or may not satisfy your performance requirements. Yeah, it's still a work in progress. Work in progress for sure. Okay, I think we're on top of the questions for now. So, if you had a next part? Sure. So we just saw a quick demo of .NET, right? The other big platform request we generally get is Cloud Foundry, and generally people use Spring Boot along with it. We have Pablo from our team — he's from our Tokyo lab — who has been working deeply on this, seeing how well Move2Kube can translate these apps. Let's have a quick demo of that. Sure, let me share my screen. Okay, can you see my screen? Yes. So yeah, this is a quick demo of using Move2Kube to migrate a Cloud Foundry, Spring Boot based application. It's a fairly simple application. You can see here the pom.xml file; there's a lot of information in it in terms of the packaging, the Java version, et cetera. And the main challenge when we are trying to migrate these types of applications is the diverse sources of configuration files. In that sense, the challenge for Move2Kube is to capture all of them, integrate and aggregate them, and then use that information to generate the target artifacts — for example, the Dockerfile. So for this application, you can see we have a Cloud Foundry manifest, and we have an application.properties, which is common in Spring Boot applications. So let's see how Move2Kube works in this context. For this demo, I will use the command line version of Move2Kube; I already have Move2Kube installed. So we run move2kube, and here you can see we just give it the path to the actual application. Here we are just running the plan phase, the same as was shown before on the UI, and here we can see the output that we obtain from the command line. It takes a little bit of time, but it should be done in just a moment.
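While that runs: the two-phase CLI flow Pablo is using can be sketched like this. The flags reflect the Move2Kube CLI of that era; check `move2kube --help` on your version, as the subcommands have evolved since:

```shell
# Phase 1: analyze the source tree and write a plan file.
move2kube plan -s path/to/app

# Phase 2: answer the interactive questionnaire and generate the artifacts.
move2kube translate -s path/to/app
```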
Okay, so here you can see that Move2Kube generated the plan file, and it's located here. Based on that, we can just execute the second phase, which is the translate. And here you can see how Move2Kube identified this as an application based on Cloud Foundry. We go through the same questionnaire we saw on the UI, but in this case we interact directly with the command line. Here you can see Move2Kube says that one of the available options is Spring Boot, because it identified that the application uses that framework. So we go with that, and for the rest of the questions, let's keep the default values just for this demo. Based on that, we have an output folder that was created with the name myproject. Let's inspect what is inside — we can open it in the editor. So this is the myproject folder that Move2Kube just created, and here, specifically in the source folder, let's take a look at the generated Dockerfile. So here on the right, we can see how Move2Kube reasoned over several pieces of information that it captured from the application. For example, it has to install Maven, because we know this application is Maven based, right? Based on that, Move2Kube adds these instructions to the Dockerfile to generate the final deployable file. We know that it's a WAR-based file, so Move2Kube also takes care of that. As this application does not use the embedded server that is the default behavior in Spring Boot, Move2Kube selects the more appropriate image, from JBoss WildFly, that is compatible with the actual Java version of this application. Move2Kube also takes care of all the ports that need to be exposed. And finally, in the last line of this generated Dockerfile, Move2Kube identifies and builds the path to the deployable file and copies it to the deployments folder.
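Put together, the generated Dockerfile Pablo walked through has roughly this shape. This is an illustrative sketch under assumed image versions, not the literal generated file:

```dockerfile
# Illustrative sketch of a Move2Kube-style Dockerfile for a Cloud Foundry
# Spring Boot app packaged as a WAR: build with Maven, deploy on JBoss WildFly.
FROM maven:3-jdk-8 AS build
WORKDIR /src
COPY . .
RUN mvn -B clean package

FROM jboss/wildfly:latest
COPY --from=build /src/target/*.war /opt/jboss/wildfly/standalone/deployments/
EXPOSE 8080
```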
So you can see how Move2Kube is able to support this type of migration, where we have several technologies such as Cloud Foundry and Spring Boot, and what kind of output it can provide to the user. From here the user can continue, right? There are probably some other changes still to be made, but Move2Kube gives a good candidate for deployment. That is basically how we currently handle this scenario. Thanks, Pablo. So as we just saw, it was a very small Cloud Foundry app, and Move2Kube was able to detect that it was a Cloud Foundry Spring Boot app and get all the right parameters for you. And as we talked about at the start of the session, this can be customized for your specific setups based on your needs. So the other thing I would like to show: there was an earlier question on how to try out Move2Kube, right? You can definitely head over to move2kube.konveyor.io and try it. In addition to that, we also have a catalog of scenarios that you can try. We are building more and more scenarios, but we have an initial seed. Josh, sorry, can we have a quick preview of that? Is that fine? Okay. Yeah, go for it. While you're setting that up, a quick question. Again, a lot of the questions are about general migration and not necessarily things that are included in Move2Kube's functionality right now. Somebody was asking about network access: they have a bunch of network access rules around their current VM infrastructure, and they were asking how to convert those to network policy in Kubernetes. Sure. So, for example, as you might see here in this particular demo about Docker Compose to Kubernetes, the Docker Compose environment has a concept of a network.
So if the source platform has a concept which is very similar to a network, Move2Kube can understand that and create the right things for you. For example, if there's a network here, it will create the right network policies for you. It can even be extended to create security policies and such, whatever the organization requires. It all depends on the information that is available in the source code; and if you need additional information, you can always query the user for it and turn that input into policies. Thanks. Thank you. Akash, do you want to pick it up from here? Yeah, sure. Thanks, Ashok. So if you are new to Move2Kube, you don't need to worry. Move2Kube is available on Katacoda: you just need to go to katacoda.com/move2kube and you can quickly get hands-on with Move2Kube. I'll quickly go over one of the scenarios we have. You don't need to install anything on your laptop or local machine; you can try out Move2Kube directly on Katacoda. In the first step we install Move2Kube, and now Move2Kube is installed. We have a full-fledged demo of Move2Kube here using our sample Docker Compose file, but for these office hours we will use an open-source project and try to generate the deployment artifacts for it using Move2Kube. So I'll just clone that repository, and next I'll run move2kube translate over the project. Move2Kube is asking some questions; internally it is creating a plan for you, going through the folders and trying to identify whatever services it can find based on the Dockerfiles and Docker Compose files present in the source project. And as you can see, Move2Kube supports multiple sources, like cloud native buildpacks and such. If for a particular client you are interested in only some of the sources, you can select or deselect them to speed up the process.
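Coming back to the network-policy question from a moment ago, a Docker Compose network shared by two services could translate into a Kubernetes NetworkPolicy along these lines. The names and label selectors here are hypothetical illustrations, not actual Move2Kube output.

```yaml
# Hypothetical sketch: a Compose network "backend" shared by services "web"
# and "db" could become a policy that only lets "web" pods reach "db" pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-network        # named after the Compose network (assumption)
spec:
  podSelector:
    matchLabels:
      app: db                  # the policy applies to the "db" service's pods
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: web         # only "web" pods may connect
```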
For example, cloud native buildpack images generally take a lot of time to pull because they are gigabytes in size, so deselecting them can speed up the process. Sure, so these are the services that Move2Kube has identified based on the Dockerfiles and Docker Compose files, and now it is asking which services we want to select, then what containerization technique we want to use, then we have to select the cluster type, and we can select or deselect whichever services we want to expose, and then provide the paths on which each service should be exposed. It's asking on which path we should expose the service, and then some more questions. I'm just quickly going with the default answers here, and then we will go and see what artifacts Move2Kube has generated for us. So here are the artifacts. We have a readme file giving step-by-step instructions on how to deploy the application to Kubernetes; we have the Tekton CI/CD pipeline related artifacts; and we have the Helm chart and the YAMLs required for deploying to Kubernetes. So in this way, if we have a Docker Compose file, we can generate the target artifacts within minutes using Move2Kube, and I would encourage all of you to go through our Katacoda scenarios and share your experience and feedback on our community channels. Over to you, Ashok. Thanks. Okay, Josh, if there are any questions we can take them, or I will quickly flash one screen with our contact details and we can take more questions. Yeah, I have a question about a, well, actually it's not a completely different topic; it's a getting-started question. Somebody wants to know: do they have to have Docker installed in order to use Move2Kube? So the question is, is Docker required for trying out Move2Kube? Is Docker required if they want to use Move2Kube on their desktop in order to convert source code? Not necessarily.
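Before moving on, the generated artifacts just listed typically land in an output layout roughly like the following. This is an illustrative sketch; exact folder and chart names vary by project and Move2Kube version.

```
myproject/
├── Readme.md         # step-by-step deploy instructions
├── deploy/           # plain Kubernetes YAMLs (folder name is an assumption)
├── helm-chart/       # Helm chart variant of the same manifests
├── cicd/             # Tekton CI/CD pipeline artifacts
└── scripts/          # build and push helper scripts
```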
So if you need to use cloud native buildpack containerization, which is based on images, then you need at least either Podman or Docker; it supports both. If cloud native buildpacks are not required, you don't really need a container runtime on your base machine; you can use the command-line tool and you'll be all set. Okay, but a developer whose laptop is, for example, running Windows Home would need to find some other platform to work on then? Yes, we have tried it on WSL2, so that is something they can try out. Okay. So if they have that Windows edition, they can just install a WSL2 Linux environment to work with. Or the other way around: they can bring up the Move2Kube Docker image on the Windows machine, in which case they are just using the browser. Okay, go ahead. Okay, great. So we saw a lot of Move2Kube functionality. Much of it is already open source, and it has been tested in a lot of scenarios. Right now we have the 0.2.0 alpha version and the 0.1.0 release version in the open. We have an exciting plan over the next few months, where we are looking at taking all this functionality and enhancing it based on what the community needs. So if you have any requirements, questions, or use cases, head over and give us your input, either in the Konveyor community channel on Slack, at slack.k8s.io, or in the move2kube-dev Google group; all the links are on the move2kube.konveyor.io website. A few of the things coming up: we are planning to support custom templating for our YAMLs. We support a lot of artifacts right now, but what if you have a CRD? That functionality is coming quite soon, and we are adding more enriched containerizers, which can use more interesting techniques, machine learning techniques and such, to extract more parameters from your applications.
And the things you saw that are plug-ins right now, like the Spring Boot support, the Windows container support and such, we will be adding as part of the base Move2Kube tooling itself. We are also looking at Netflix OSS, Argo CD, and more. This is the prioritization at this point; all of it is subject to change depending on your needs, so do head to the channels and let us know. Yeah, so Josh, that's the thing I would like to... Cool, yeah. I was just looking to see whether or not we had any additional questions. There are a couple, but we're actually at the end of our time, and people are starting to sign off for the day because it's the end of KubeCon. Yeah. So I'll just say, to wind it up, any last thoughts from the team on migrating to Kubernetes and Move2Kube, and on where we are with cloud native today? Oh, wait, wait. Ooh, okay. Actually, one follow-up on the earlier Spring Boot thing. Oh, no. Sorry, this is advice for working around the Windows Home restrictions, so I'll just let that user read it. So, last thoughts to wind it up. Yeah, so this is a community effort. All of us are here to get your feedback and improve the tool, and if you want to contribute, we have interesting stuff in Move2Kube, right? In the Go language, JavaScript, and a lot more. So we value your contributions; come over, and we will help you get started and get your contributions into Move2Kube. Yeah. And team, if you have any other points, please go ahead. Thank you to the folks who spent the time to listen to us, and for the great questions; I think they'll definitely make us think a bit more about some of these dimensions. Awesome. So that's all the time we have, and I appreciate everybody coming on to talk about Move2Kube, and I appreciate everybody listening and tuning in out there. So Move2Kube might be over, but OpenShift TV lives on forever. At noon Eastern today, 1800 CEST,
we're going to have an OpenShift Commons briefing with Kristen Macklemore from Red Hat talking about scaling the portfolio wall, so please feel free to tune into that. And as always, thank you, Josh, for running these office hours during KubeCon; they are always informative. Thank you, and thank you for hosting and streaming us. And we will actually have future office hours events on OpenShift TV; we have some teams who are interested in doing that. So subscribe to OpenShift TV; Twitch, say, is a good way to be informed about upcoming shows. We have a streaming calendar; I'll drop a link in chat if you're curious. Go ahead and subscribe to that if you want; YouTube is a good place too. And if you are in the process of migrating to Kubernetes: like I said, konveyor.io is not just a set of tools. It's a community of practitioners, of people helping each other with all of the many complicated problems we face in migrating. So please join it for some peer-to-peer help and sharing with a community of people who are facing the same problems you are. And thank you very much. Thank you all for your time today. Thank you very much. Thank you.