Thanks for joining us, everybody, for this talk on the Konveyor community and Move2Kube. Today we're going to talk about one project in particular, but we do want to let you know that there are a number of toolkits and web services that Red Hat and our partners in the community are working on. The first is the Migration Toolkit for Applications, which helps you break down your applications, specifically Java applications, and understand how you can modernize them. This includes migrating from something like Spring Boot to Quarkus and modernizing your applications to see if they can run on OpenShift. Another toolkit we're working on is the Migration Toolkit for Containers, which helps you migrate your applications between different Kubernetes clusters. It was designed specifically to help customers move between OpenShift version 3 and OpenShift version 4, so it will migrate your namespaces, your objects, and your persistent data. A third toolkit is the Migration Toolkit for Virtualization. This one is currently under development, and we expect to release it later in the year; it helps you mass-migrate virtual machines from VMware into OpenShift Virtualization, bringing your virtual machines forward into a Kubernetes-orchestrated environment. And today, what we're going to focus on, with Amit and Ashok from the IBM Research team, is a tool called Move2Kube. While this isn't an officially supported toolkit from Red Hat, we're excited that it's been open sourced, and they're going to talk to you about how it can help you increase the efficiency of moving from Docker Swarm and from Cloud Foundry to a Kubernetes environment, how it was developed, and they'll give you a demonstration of it today.
So really, what's important here is not just the toolkits we're developing, but how we're going about developing them. In true Red Hat fashion, we've open sourced all of these tools, and we are actively developing them in a community called the Konveyor community. Konveyor is really Red Hat's attempt to build a catalyzing community around tools and best practices for how you break down monoliths, adopt containers, and embrace Kubernetes. So we're excited about this. If you're interested, please join us; it's a completely open community, and we would welcome any contributions, or just involvement and participation in any of the talks or meetups that we're scheduling. With that, I'll hand it over to Amit. Hi everyone, thank you, James. Let's take a look in a bit more detail at what Move2Kube is about. As James mentioned, in the Konveyor community we are building a variety of tools that help you move your applications to cloud native. And one of the challenges that we find our clients and users running into is that when you're moving from one of these platforms, such as Docker Swarm or Cloud Foundry, over to Kubernetes or OpenShift, you need deep skills in both the source platform and the target platform, and of course good expertise and knowledge of the application itself. Move2Kube tries to accelerate that process by reducing those skill requirements and the time required to perform these re-platforming activities. So Move2Kube starts off with any of these source platforms, and the first thing it does is discover your deployment specification on them. If you've got containers running on Docker Swarm, or an application running on Cloud Foundry, it will discover everything that's needed to specify that particular deployment, and translate it into a Kubernetes specification.
And in cases where the source platform is not already containerized, for example Cloud Foundry or native Java applications, it will actually containerize them for you, and it provides three different mechanisms for that containerization. You can choose a Dockerfile-based approach, so you actually get a Dockerfile out of the process; you can use Cloud Native Buildpacks if you're coming from something like Cloud Foundry; or you can use S2I (Source-to-Image) to go directly from your source code to container images running on OpenShift. After translating into a Kubernetes model, Move2Kube then adds in the best practices and features that are relevant for Kubernetes but were not relevant on the source platforms, allows the user to customize the result as per their needs, their enterprise requirements for instance, and then generates the deployment artifacts that you can deploy onto a target cluster, or pull into your target CI/CD pipeline. So you get your Kubernetes YAMLs, but Move2Kube will also generate Helm charts for you. It can generate an operator to deploy the application easily. It's starting to support certain Knative capabilities, in particular Knative serving, and the team has also started generating some artifacts out of Move2Kube for Tekton pipelines. Overall, end to end, Move2Kube really tries to provide you an integrated tool that performs discovery of your deployment spec, containerizes where relevant, translates over to Kubernetes, and then optimizes and customizes all the way to a set of deployment artifacts that you can use. So with that, let me hand it over to my colleague Ashok, who is going to take you through the details and a demonstration of the tool. Thanks, Amit, for the overview of the capabilities of Move2Kube. Now we are going to enter demo mode, where we look at how these capabilities are encapsulated in the different workflows of Move2Kube.
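As a rough sketch of the Knative serving output just mentioned, a minimal Knative Service manifest looks like this (the name and image here are hypothetical examples, not Move2Kube's literal output):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: myapp                                  # hypothetical name
spec:
  template:
    spec:
      containers:
        - image: registry.example.com/myapp:latest   # hypothetical image
          ports:
            - containerPort: 8080
```

Knative serving then handles revisioning and scale-to-zero for that workload, which is why it is generated as an alternative to a plain Deployment plus Service.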
Move2Kube can cater to very complex scenarios, where you have multitudes of applications with different characteristics, and it can handle the simpler use cases too. And Move2Kube has the capability to be integrated with or without human interaction. Let's see how each of these flows can be embodied in Move2Kube. Move2Kube can be consumed in two different ways: as a command line tool or as a web interface. As a command line tool, you can install it using the curl command that you see here. You just copy-paste it into your terminal, and you have Move2Kube on your system in a matter of seconds. Or you can grab it from the GitHub releases, or install it using the go get command. Similarly, the web interface can be installed using a Helm chart, Docker Compose, or as an operator. The repository indicated below is the command line tool repository; you can head over there and ask us any questions as issues, or just interact with us. As I mentioned, Move2Kube can be consumed in multiple ways. The simplest is a single-command flow, where you just invoke the move2kube translate command, pointing it at the source folder. It will then interact with you and create all the target artifacts necessary for deploying that application, which you can put into your CI/CD pipelines or deploy directly to your cluster. For more complex scenarios, Move2Kube can be used as a two-step or a three-step process. In the two-step process, you first run the plan phase, where you point Move2Kube at the source folder; it analyzes the source folders and gives you a plan of the different services it has identified and how it intends to translate them. You can then edit the plan and give Move2Kube hints to do a better translation. Once you have the plan ready, you invoke the translate phase, which creates all the target artifacts for you. And then there is an optional third step.
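The consumption flows just described can be sketched as shell commands (the paths are examples from the demos repository; the flags match the ones used later in this demo, but exact syntax may vary by Move2Kube version):

```shell
# One-step flow: translate straight from the source folder
move2kube translate -s samples/e2e-flow

# Two-step flow: generate a plan, optionally edit m2k.plan, then translate
move2kube plan -s samples/e2e-flow
move2kube translate            # picks up the m2k.plan it just wrote

# Optional extra step: collect runtime data from the Cloud Foundry and
# Kubernetes instances in your terminal's current context before planning
move2kube collect
```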
If you want to analyze runtime artifacts as well, you can run move2kube collect, and it will automatically look at the runtime instances that are in context in your terminal. For example, if you have a Cloud Foundry instance, it will gather information about the deployed apps and their buildpacks, along with information about the Kubernetes clusters that are in context. We'll look at what information it collects in a short while. This information can then be put into the source folder that you point the plan phase at, which will consider all of it together and guide you through using it in both the plan and the translate. Now, let's look at a few demo flows where various source platforms are translated to Kubernetes. The demos we are going to see today are all available in Move2Kube's demos repository, which is right here. There is a tutorials folder containing markdown files explaining the different steps you need to follow, and the sample applications are in the samples folder. We are going to use a checked-out version of that repository right here, and what we are going to do is go through a few flows. Let's look at what is inside the samples folder. The samples folder, as you can see, has multiple different kinds of applications. We'll go through a few of them to look at the major capabilities of Move2Kube. The first one we are going to look at is the e2e-flow, where there are two applications, a Golang application and a Node.js application. These could be Cloud Foundry applications, or they could be normal applications you have deployed to your VMs. We will look at how they can be translated to Kubernetes. We are going to use the one-step process for this flow because it's a very simple use case. The first thing you do is run move2kube translate and point it at the samples/e2e-flow folder.
Once you do that, it goes through each and every file, tries to analyze and understand each one, and interacts with you whenever it has a question. Right now, it is creating the plan internally, and it will come back to us when it's done. Now it says that it has identified two services, and asks if we want to translate all of them. Then it says that for both services it can translate using multiple containerization techniques: Dockerfile, S2I, or Cloud Native Buildpacks. In this case, let's just go with the Dockerfile. Then it asks what you want to create: plain YAML files, Helm charts, or Knative artifacts. Let's go with Helm. It asks what kind of cluster you're going to deploy to: an OpenShift cluster, a Kubernetes cluster, or a particular flavor of Kubernetes, be it IBM Cloud, OpenShift, Azure, AWS, or GCP. Or you can have your own custom cluster types, which we'll look at shortly. Then it asks which services you want to expose externally, so you can select whatever you want to expose. Then it asks where the registry is; I'm going to point it at the us.icr.io registry. Then it asks for the namespace in which to deploy; I'm going to use m2kdemo. Then I specify my pull secret. Next it asks for the ingress host, so I'm going to grab that from the cluster I'm going to deploy to; this is the ingress subdomain of the IBM Cloud Kubernetes cluster I'll be deploying to. Once I enter that, it asks for a TLS secret. If I don't give a secret, it will create an HTTP ingress endpoint; if I give one, it will create an HTTPS endpoint. I'm going to go with the defaults for everything else, including the CI/CD pipeline.
And now it has created the artifacts for us in a folder called myproject, since we did not override the default project name. Let's see what's inside this myproject folder. It has created multiple artifacts. There's a readme which guides you through the next steps, but essentially there are several scripts at the base level, like build images and copy sources, and even a Docker Compose file and a few others, which we can use to install. In addition to that, it has created a Helm chart, which you can see here: there's a Chart.yaml, a readme, and all the artifacts, the deployment artifacts, the service artifacts, the ingress, everything, including a values.yaml which parameterizes most of the default artifacts. It has also created a sample operator for us, which is a Helm-based operator, and a CI/CD pipeline. This CI/CD pipeline uses the containerization scripts: since we asked Move2Kube to create Dockerfiles for us, it has created Dockerfiles and scripts that can build the images using those Dockerfiles. So now let's take these artifacts and try to deploy to our cluster. The first thing we do is go into the folder and look at all the artifacts. Then we need to copy the sources: the containerization scripts you saw are all Dockerfiles and such, but they need the source code to be able to build. So we run the copy sources script and point it at the folder we gave as input to Move2Kube, in this case e2e-flow. Once we do that, the files are copied. The next thing we do is create the images, so we call buildimages.sh, which builds all the images. And then we push the images to the registry. Since Move2Kube asked us for all this information during translation, it pre-fills most of it so that you can push the images.
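The deploy preparation just described boils down to three of the generated scripts (the script names and the source path here follow this demo, so treat them as examples rather than guaranteed names):

```shell
cd myproject
./copysources.sh ../samples/e2e-flow   # copy the app sources next to the generated Dockerfiles
./buildimages.sh                       # docker build one image per discovered service
./pushimages.sh                        # push to the registry and namespace chosen during translation
```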
So now it is pushing the images to us.icr.io/m2kdemo/golang, and the Node.js image too. Once it finishes pushing the images, we will try to deploy this into the cluster. So let's see how that process goes. Now that it has pushed all the images, the next step, since we created a Helm chart, is to install it using the helm install command. So here we do the helm install, and let's see what happens. It is checking whether there is a project named myproject; if not, it creates one, and it installs into the cluster we saw earlier. And it has exposed the application through this ingress, so we will just copy this and try going there. It takes us to the base of the ingress, and the two applications are exposed under their own paths. Since we did not create a TLS secret, let's go to the HTTP endpoint for the Golang app, and likewise the HTTP endpoint for the Node.js app. And once we go there, you see the application. If you had given the TLS secret, the HTTPS endpoints would also be working. So what we just saw is taking two applications which you would normally deploy to a VM or to Cloud Foundry, and deploying them into Kubernetes in a matter of a few seconds or minutes. Now let's look at a few other source platforms. One of the more common use cases is Docker Compose. When you develop your application, you generally create a Docker Compose file, and then when you decide to deploy to Kubernetes, you have to rewrite all of it as Kubernetes artifacts. In this case, let's take the Docker Compose file and deploy to Kubernetes. For that, we are going to use this sample over here, which has a docker-compose.yaml file with a few services. Let's use the single-command flow again to translate the Docker Compose file: I'm going to call move2kube translate and point it at the samples/docker-compose folder.
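To make concrete what this kind of translation does, a single Compose service roughly becomes a Deployment plus a Service. This is an illustrative sketch with hypothetical names, not Move2Kube's literal output:

```yaml
# Input Compose service (hypothetical):
#   web:
#     image: myorg/web:1.0
#     ports: ["8080:8080"]
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels: {app: web}
  template:
    metadata:
      labels: {app: web}
    spec:
      containers:
        - name: web
          image: myorg/web:1.0
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector: {app: web}
  ports:
    - port: 8080
      targetPort: 8080
```

Exposed services additionally get an Ingress rule routing to the Service, as we saw in the earlier flow.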
You could have multiple Docker Compose files; in production you might have several of them. Move2Kube has the capability to go through all the Docker Compose files, combine them, and give you a holistic view. Now it is going through the files and helping you with the translation process. Let's see what it comes up with and what services it has identified. It has found three services: an API service, a Redis service, and a web service. It asks whether to translate all three of them, and it notes that these are already containerized ones, the images exist, so you can reuse them; asked whether I want to reuse the container images, I say yes. Then it asks whether you want a Helm chart, plain YAMLs, or Knative artifacts; let's go with the YAMLs. Then it asks for the cluster type you want to deploy to and the services that need to be exposed, and let's go with the defaults for everything else. Once we have that, it creates the myproject folder again. Let's look at what's in the myproject folder now. Since this is a pre-containerized environment, the container images already exist, so only the Kubernetes artifacts are created: a deployment artifact, a service YAML, and the ingress, for each of the different services. So this is a quick way to take your Docker Compose file and, within a few seconds, have all the Kubernetes artifacts required to deploy to your cluster. The next flow we are going to see is a combination of multiple language stacks, Java, Go, Python, and so on, which need to be containerized and then put onto Kubernetes. Let's see a quick example of that. For this, we are going to take the language-platforms folder, which has multiple applications to be translated, and we are going to use the two-step approach. The first step is the plan phase, and for the plan, we are going to give this folder as input.
And once I give it the language-platforms folder, it goes through each and every file in there, combines them, tries to make sense of them, and then comes up with a plan for you, which you can then curate. In addition to the default built-in containerization techniques, Move2Kube also supports extending the containerization techniques. In this source folder, there is an extended containerization technique included as part of the source: you see this java-gradle folder containing Move2Kube Dockerfile metadata and a Dockerfile template. Move2Kube can understand this folder and treat it as one of the containerization techniques, and it will use it to containerize the relevant folders wherever it applies. We'll see in this example how that works out. So it has created an m2k.plan for us, which is essentially a YAML file. It has a multitude of options: the different services, the different information it has collected, and the different ways in which each application can be containerized. To take us through the curation of the plan, we use the translate command: I run translate with the -c flag, which curates the plan. It asks which folders need to be containerized; in this case, these are the applications I need to containerize, so it gives me options. It asks which containerization techniques you want to allow, and then for each of the folders it says: I know different ways of containerizing this, which one do you want to use? For example, for Node.js I can create a Dockerfile; for PHP I can, for example, choose S2I; for Python I can use a Cloud Native Buildpack; and for Ruby, for Golang, and for Java Gradle I can use a Dockerfile.
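For orientation, an m2k.plan is just YAML along these lines; the exact schema and field names vary by Move2Kube version, so treat this as an illustrative sketch only:

```yaml
apiVersion: move2kube.konveyor.io/v1alpha1   # illustrative
kind: Plan
metadata:
  name: myproject
spec:
  services:
    golang:                                  # one entry per discovered service,
      - containerBuildType: NewDockerfile    # listing the containerization options
      - containerBuildType: S2I              # (field names here are illustrative)
      - containerBuildType: CNB
```

Because it is plain YAML, you can edit it by hand, reorder or delete options, and rerun translate against the curated plan.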
Now, when I choose Dockerfile-based containerization for Java Gradle, it gives me two different ways in which it can create a new Dockerfile for us: one using the built-in technique from the m2kassets Dockerfiles for Java Gradle, and one pointing to the java-gradle folder in our source that you saw earlier. So it has automatically picked up our custom technique to containerize the folder; let's go with that. Similarly, you can choose the other options, point it at the cluster you want to deploy to, and select the different services that need to be exposed. Let's go with the default values for everything else. Once it's done, our containerization scripts are all ready, so let's look at the output. Here, if you notice, it has created the YAMLs for us: for each of the folders and services identified, it has created the deployment artifacts, the service artifacts, and the ingress as required. It has also created a simple Docker Compose file for you so that, if you want to test the images locally, you can do that. You can follow the same approach we used for the e2e-flow to take these artifacts and deploy them to your cluster. You can see that it has created Dockerfiles for the applications we asked it to create Dockerfiles for, and it has used S2I for PHP, for example, and a Cloud Native Buildpack for Python. The build images script is intelligent enough to use the right technique for each and build the images for you, following the same process. It has also created all the Tekton artifacts required if you want to use Tekton as your CI/CD pipeline. So what we just saw is a very simple way to take a very diverse source environment, containerize all of it, and deploy it to Kubernetes. Now let's look at a few more flows. Another flow of interest is porting between different versions of Kubernetes.
For example, Kubernetes 1.9 accepts certain resources, like Deployment, at the v1beta1 API version, whereas 1.17 might support only the v1 version, and when you try to deploy the v1beta1 resource on a cluster that supports only v1, you will get an error. Move2Kube can help you with that. In addition, it can also help you change flavors. Let's say you want to use the best features of OpenShift: instead of creating a Deployment, you want a DeploymentConfig. Move2Kube can help with that too. So let's take some Kubernetes artifacts and convert them to OpenShift. For that, we are going to use this particular folder, named kubernetes-to-kubernetes. There's an API application with a deployment and a service, a Redis with a deployment and a service, and a web application with a deployment and a service. We are going to use the single-step approach for this translation. Before that, let's remove the plan file created by the previous flow. Now let's run move2kube translate -s and point it at the sample. It goes through each and every Kubernetes artifact, trying to understand what is there and what needs to be translated. It has identified three services; let's go with all three of them. Let's say we still want a Helm chart out of this, that we want to target OpenShift, and for services to expose, let's expose the web service alone. And let's go with the defaults for everything else. Once it's all done, it has created a myproject folder. Let's look at what's in there. It has created a Helm chart for us containing a DeploymentConfig, an ImageStream, and a service YAML, and it has created a Route instead of an Ingress because we are targeting an OpenShift cluster. So with a very simple approach, you can move between clusters quite easily with the help of Move2Kube. The next flow we are going to look at is a Cloud Foundry translation.
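The two rewrites described here, bumping an outdated API version and switching to an OpenShift-native kind, can be pictured side by side (a minimal sketch; the name is hypothetical and the pod specs are omitted for brevity):

```yaml
# Source manifest, written for an older cluster:
apiVersion: extensions/v1beta1   # removed in newer Kubernetes releases
kind: Deployment
metadata:
  name: web
---
# Retargeted at a newer Kubernetes cluster:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
---
# Retargeted at OpenShift, using the platform-native kind:
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: web
```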
Here we are going to use the sample called cloud-foundry, over here. There are multiple artifacts, as you can see. Before starting the Move2Kube plan phase on it, we are going to use the three-stage process, starting with the collect tool. move2kube collect is simple: you just run it in your terminal, and it can automatically understand the Cloud Foundry instance and the Kubernetes instance in your context and collect information for you. We have already run this beforehand, and it created two files, cluster.yaml and cfapps.yaml. Let's look at what that information is. First, I'm going to open cfapps.yaml. It contains information like the buildpacks that are supported and used for running this particular application, the memory, the instances, and the ports. If there are environment variables, it collects that information too. So this is the runtime information of the particular application you are trying to translate. Similarly, it has also collected some target cluster information. Since a Kubernetes cluster was in context, it was able to collect information about the storage classes the cluster supports, the API versions, and the priority between them. You can always edit this to give Move2Kube hints about which API versions to use. In addition, there's a manifest.yaml which describes how to deploy the application, which buildpack to use, and so on, and there are the source folders. Let's start the next phase on this, the plan phase. When we run the plan, it combines the runtime artifacts with the source artifacts and all the information it can derive from them, and comes up with a plan for us. Let's look at what is in the plan. It has created a plan; let's open it. In the plan, here, you can see some information. There were different options.
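As an illustration of the kind of runtime data collect records for each Cloud Foundry app (the field names here are illustrative, not the exact cfapps.yaml schema):

```yaml
name: web                 # illustrative entry for one Cloud Foundry app
buildpack: go_buildpack
memory: 256M
instances: 2
ports:
  - 8080
env:                      # collected if the app defines environment variables
  LOG_LEVEL: info
```

Dropping files like this into the folder you hand to the plan phase is what lets Move2Kube combine runtime and source information in one translation.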
And one of the options is to containerize the application using a Cloud Native Buildpack, and it says it can do that in two different ways, with a Cloud Foundry image or a Google image. It also says it can use the source artifacts together with the runtime information, combine all of them, and do the translation. In addition, it points to the target cluster, cluster.yaml, which it can deploy to. Now let's invoke translate on this. As soon as I run translate, let's say I'm interested only in the Cloud Native Buildpack; there are two images I can target. Then it asks what kind of artifacts are required and what kind of cluster: here you can see there is a new cluster type listed, cluster.yaml, which is the custom cluster you collected. Once you do that, it creates all the artifacts for you in myproject. So let's look at what's there. It has created all the deployment artifacts; it was an OpenShift cluster that we collected, so it has created a DeploymentConfig, an ImageStream, a Route, and a Service for us. So that is a simple way to combine multiple kinds of information, runtime information, source information, and even target-cluster information, and do a holistic translation. Similar to all the command line flows, we can also use a web UI to do the translation. It has all the capabilities that we saw in the command line tool. For that, all you need to do is head over to the Move2Kube UI repository, clone it, and run docker-compose up. So let me take you through a simple flow of that. I'm going to go to the Move2Kube UI repository, use this docker-compose.yaml, and run docker-compose up. It starts the Move2Kube components that are required for the UI to run. Once it's up, it will be available on port 8080, which I'm going to open now. As you can see, there is a pre-existing project called demo.
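The UI setup just described is a short sequence (the repository URL points at the Konveyor UI repo; the port matches the one shown in the demo):

```shell
git clone https://github.com/konveyor/move2kube-ui.git
cd move2kube-ui
docker-compose up        # brings up the Move2Kube UI components
# then open http://localhost:8080 in a browser
```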
This is because there was already an existing workspace folder containing a pre-existing project. If you start it with that same folder, you will see all your previous projects listed. In addition to that, let's create a new project called kubecon, go to its details, and upload our first asset. The first asset we are going to upload is the Docker Compose file that we used earlier. It is in the folder called ui-local-workflow, and we are going to upload a zipped version of that file. As soon as we upload, the file is available there; we create the plan, and it works in the background on creating the plan for the Docker Compose file we just selected. Once the plan is available, we can see the different services. For now, let's look at a previous project for which we had already created the plan. If there are more services, they will be listed there, and we can curate them again. Once that is done, we can go to the target artifacts tab, click on translate, and go through the same question-and-answer flow that we just went through on the command line. All the features we saw in the command line tool are available through the UI, and we can download the result as a zip file. That's a quick overview of Move2Kube's capabilities and the demo flows. Now, let's look at the different requirements Move2Kube has depending on the source platform. It can gracefully scale depending on the data available. Whether only manifest files are available, source code is additionally available, or, in the case of Cloud Foundry, runtime instances are available, it can create the right artifacts from whatever is present. Similarly for Docker Compose: depending on whether only the Docker Compose files or also the source code is available, it can create the right destination artifacts. If there are only source applications that need to be containerized, Move2Kube can handle that too and create the right artifacts.
With that, I would like to hand it over to Amit for a quick overview of a case study we did with Move2Kube. Thank you, Ashok. So folks, here's an actual case study where we took Move2Kube and ran it on a real application. In this case, the application was running on Docker Swarm. It had a variety of containers: custom containers written in Java, custom containers written in Node.js, and some middleware containers, for example MongoDB and DataPower. And we wanted to do the translation over to OpenShift, get the application running on OpenShift, and then compare that with the process where everything was done manually, to prove to ourselves whether we actually see productivity gains. The table here shows some of the results. It shows six task groups, ranging from discovering relevant assets, translating artifacts that are easy to translate, translating those that are more complex, adding in features and best practices for OpenShift, customizing, and then right-sizing the configuration and deploying. Now, in this case the manual activity was an estimate, while the Move2Kube activity was actually carried out in practice, and we saw a very nice 9x productivity gain for this roughly 100-container scenario. Of course, this can go up and down a bit depending on your source platform, the skills of the user, and the specific scenario, but this was a great proof point that really convinced us to go further and build this tool out for a variety of other platforms. And this is our closing slide. Thank you for your time listening to us. Here you see various links out in the community, all housed within Konveyor. So please head over to any of these links: you can try the tool, you can contribute and submit pull requests. We'd love to have more participation from the community.