Hello everyone, welcome to the app modernization demonstration. My name is Hank Scorpio, and I work for the Globex Corporation; I'm glad you could join me today. We are a global conglomerate, and our retail business is one of the key businesses in our overall strategy as a company. I want to tell you a little bit about the history of one of the applications inside our retail business. A group of folks from the Konveyor community have been knocking on my door and emailing me, wanting to hear about the history of our retail application and where we are today. So I'm excited to invite them into the meeting, tell them that history, and see what they can do to help me modernize this application.

This retail application is a typical n-tier application started in the mid-2000s. It's a monolith, and today it runs on VMs on VMware vSphere. We've got a number of challenges with it: our code commits take a very long time; a code deployment once brought down the entire system; when things fail, it often takes us hours to fix them, leading to long downtime; and during peak times we have trouble handling transaction volume. So if you're familiar with the DevOps metrics you want to measure, we're failing in all of them with this application.

So we decided to begin to strangle this monolith. We identified four services inside the application: the gateway service, the customer service, the order service, and the inventory service, and we began to strangle them out. At the time we heard of a platform called Cloud Foundry, so we strangled out the gateway and order services and developed a new, modern front end that would make our application more customer friendly as well. We did all of this on Cloud Foundry.
We did it using Spring Boot. Unfortunately, we realized that Cloud Foundry was not the best choice of platform given the momentum that Kubernetes has in the ecosystem. So we started to realize that we didn't want to develop any more applications on Cloud Foundry, and we also started to discover new runtimes like Quarkus that were even more efficient than Spring Boot for cloud native application development. What we did next was take the inventory service and split it out into a Kubernetes environment. The nice thing here is that we could actually bring our database onto Kubernetes, because Kubernetes could handle persistence. We developed the inventory service using Quarkus, and we thought we were moving in the right direction. Unfortunately, the big challenge we have with the Kubernetes environment is that we've manually deployed this inventory service: our development teams developed it and have a manual way of deploying it, and in order to promote it into our production Kubernetes clusters, we're going to have to replatform it and automate the way we do this. They're really trying to embrace a GitOps method for development, so they don't want manual deployments; they want to be able to redeploy in an automated fashion. So this is where we are today.
It's really challenging. Our customer service still remains on VMs, and it's slowing our deployment frequency; we still can't deploy any faster than we used to, because the customer service is lagging behind in its ability to deploy faster. Our services on Cloud Foundry need a path forward, and we need to move them to a platform that has a strong and reliable future. Our inventory service needs to be automated so that we can deploy it into our production Kubernetes environments. And perhaps most of all, we're maintaining three platforms, which becomes really difficult; you can see one of our employees there having a tough time managing all three platforms at once.

So I've invited the Konveyor team here, because ultimately what I want to get to is this: I want all of my services running on Kubernetes so that I get the benefits of it, horizontal scaling, automated rollout and rollback, all those great things Kubernetes gives me. I want to leverage a GitOps model to decrease my lead time for change, mean time to recover, and change failure rate, and to increase my deployment frequency. I want to simplify my operations by putting all of this on a single platform that's easier for my ops teams to manage. Then I could start to plug in cloud services and do even more cloud native things with this application in the future to increase its value. So I've invited the Konveyor team here. This is my current retail application. Ramon, you were telling me how you can help me start to modernize this app. Can you tell me more?
Yeah, of course. First of all, I think we need to run an assessment on all the different services within your application, and it looks like that legacy customer component will be a bit problematic. So I think we should run an analysis on that one to find out what could prevent it from running on containers, and once we've found everything, start with the refactoring of the application to adapt it to a more cloud-friendly architecture.

Nice, that's great. So once we do that, what am I going to do with this database, though? Miguel, you were telling me...

Yeah, I mean, there are some workloads that are not intended to be migrated straight away, where modernization could take longer, but you can bring them to Kubernetes either way by moving them as virtual machines. You just take the database in the virtual machine as it is and deploy it as a virtual machine on the target, and then you can leverage all the features that Kubernetes provides for virtual machines, because they become just another ordinary Kubernetes object, and that will blend in a lot better with your GitOps workflow.

Thanks. Yeah, that's good, because I'm scared to do anything with that database; it scares me to death to touch it. It's very old. What about my Cloud Foundry apps?

Yeah, so you can take all your Cloud Foundry apps and use Konveyor Move2Kube to analyze the code and create the right artifacts for deploying them into Kubernetes, including your CI/CD pipelines. Everything will be modernized with Move2Kube.

Oh, wow. All right, thanks, Ashok. And finally, my inventory service. What am I going to do here?
So for that we'll use the Crane project, which will help you remove everything that was hard-coded in your deployment. We'll clean this up, push it to Git, and get this fully automated, so that you can really deploy this app and promote it from dev to QA to production in an automated fashion.

Thanks, Marco. So we've got: assess, analyze, refactor, rehost, replatform, all of these things onto Kubernetes. I'll tell you what, I want to believe you all, but it sounds too good to be true. I think I need to see it to believe it. So maybe we can get started?

Okay, definitely. So allow me to start with the assessment of your portfolio with Tackle. Let me share my screen, and pray to the Fedora gods that this works... It seems to be working. Great, okay. The entry point for the Tackle tool is the application inventory, and what we intend to do with the application inventory is offer organizations a way to manage their application portfolio and have a holistic view of it. It looks like your architects were proactive enough to load your portfolio into the application inventory, so we can get started here. One of the cool things about the inventory is that it allows you to classify and manage your portfolio in any way you might want. First of all, we have the notion of business services; right now we have all your different business services on screen, but we want to focus on your retail application.
So we can filter on the retail application, just like this. And talking about managing and classifying the portfolio, one of the most exciting features in the application inventory is an extensible tagging model that allows you to classify your portfolio in as many dimensions as you might want. For example, focusing on this legacy customer management application we were discussing before, if we expand it we can see a series of tags. For the moment we have used the tags for the technologies each application uses, so we have Java, Tomcat, Oracle. As I said, the tagging model is extensible, so we added another tag type related to the different custom frameworks you might be using within your applications. It seems this legacy customer management application is using a custom configuration library, and I get the feeling that this might be the problem we need to solve to make the application suitable for containers.

Yeah, that's great. It'd be great to look into that custom configuration library and learn more about it.

Okay, so let's start the assessment. We can select the application and get straight into the assessment. We already filled out the assessment beforehand; I know you're a very busy person, Mr. Scorpio, so I promise this will be fast. Great. If we continue, the first thing we're presented with is the list of stakeholders that have been involved in the assessment. There's this Homer Simpson guy in here.
I think you might be familiar with him. Yeah, he was very involved in this application.

Okay, so moving on. The way we see the assessment is as a questionnaire-driven assessment. We're presented with a series of questions on different aspects of the application landscape, and by that I mean technology, application lifecycle management, architecture, all the different concerns that might have an impact on the application. The idea behind the assessment is to find out how suitable the application is for containers.

Okay, so this gives me a general idea of the challenges I might face in containerizing the customer service app. Exactly. This would have been really helpful with the rest of the services we strangled out, so it's good we have it now. Yeah, absolutely, I'm happy to be here to help.

So what the tool does, based on your answers, is detect a series of potential risks that might prevent the application from running on containers or present some sort of threat to it. Let's skip the rest of the assessment and go straight into the review to find out what those risks are. I can save and start the review process. Here we have a high-level diagram of the different risks that have been identified from my answers, but we can get down to the detail. Yeah, we should check out the highs and mediums, I'm guessing. Absolutely. So we have the list of identified risks, and if I reorder it, I get the highs and mediums presented first. There seems to be some problem with the way your application handles service discovery, and that makes sense, because it comes from a legacy platform in which static IPs and things like that are used, and that's not very cloud friendly. So that's one thing. The other risk that has been identified is your organization's maturity level with containerization, but I guess that's why we're here.
Yeah, absolutely. And finally, we've detected that you have some trouble with how the application is configured: there seem to be multiple configuration files in multiple file system locations, and that is an anti-pattern when you're talking about cloud native and cloud friendly applications.

Yeah, I seem to recall some developers had been loading configs from local file systems, so that makes sense.

Absolutely. It seems like the custom library we already detected has some responsibility here, and we need to figure out what to do with it, maybe replace it with a more straightforward approach. Once we have identified the different risks, we have enough information to make an informed decision on the best migration strategy for this application. If we go up, we're presented with the 6 Rs format from Amazon, the standard set of migration strategies to follow. In this case we'll choose refactor, since we need to perform some changes in the source code for the application to be more container ready, more cloud friendly and cloud native.

I think removing that library is a small effort, if it's just about that, and there seems to be no other risk in the assessment. I know this application is very critical for you, and I think it should be our top priority. So we can submit the review, and everything gets stored for later consumption. Now that we have this assessment, we have some clues about what needs to be addressed here. The next step will be to do an analysis and detect, in the actual source code, what is preventing us from doing a clean migration toward the cloud.

So we're going from a high-level disposition down into figuring out what we actually need to change. Yeah, exactly.
That's the idea behind this. We have an analysis piece for Tackle in development right now; we're bringing some other projects into the Tackle umbrella, so for the moment we'll have to switch to another tool. In the future we want everything fully integrated in the same fashion as the assessment, where we click on the application and click assess; it will be the same with the analysis. So for now, we switch to another application, and this is the analysis bit. The first step is to choose the migration path we want to follow. I'm guessing you still want to keep the Tomcat runtime you have right now for the application? Yeah, I'll leave it as it is for now, just to move forward quickly. Okay, so no need to choose an application server. We need to do a containerization process, of course. I would also do a sanity check with the Linux migration path, just to make sure there are no Windows static paths in there from other versions of the application. And I know you have some problems with the licensing of the Oracle JDK and will want to get rid of that. Yeah, it would be wonderful if, while we're moving, we could also move to OpenJDK. So we choose that migration path as well: we have containerization, Linux, and OpenJDK. Maybe in the future we could use that Quarkus bit with our Spring Boot apps as well? Absolutely. If you want to do that modernization step, you've already used Quarkus for the applications you already have, so your architects are probably somewhat familiar with it.
Yeah. So, moving on: once we have selected the migration path we want to follow, it's time to select the packages we want to analyze. We remove everything and focus just on the business classes of your application, avoiding the libraries, so we'll choose this konveyor packaging. It's funny, because you use the same package naming that we do, so it kind of feels like this is some sort of staged marketing demo. But it isn't; the buildings behind me are real. Okay, yeah, it looks like that. So once we've selected the business packages we want to analyze, we move on.

The next step is the custom rules. We've been discussing this custom configuration library. We already had some conversations with your architects, and they told us about this library, so we know how to find it within your code, and we already came up with a strategy to replace it with a more straightforward, standard approach that enables externalized configuration in Kubernetes.

So, something like ConfigMaps or things like that? Exactly, being able to use ConfigMaps and Secrets. We need to sort this out within the source code itself to enable that possibility. In order to detect the usage of this library in your source code, we've come up with a custom rule for the analysis bit, so I will upload the custom rule we developed jointly with your architects. The analysis component is an extensible rules engine, so we came up with an extra rule and added it to the rule set for the analysis. We upload it, enable the rule, and then we're good to go with the analysis. Oh, great. Okay, moving on.
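For readers curious what such a custom rule looks like: the analysis engine here accepts XML rules in the Windup/MTA rule format. The sketch below is a simplified, hypothetical example (the `com.globex.config` package name and rule IDs are invented for illustration; a real rule would also carry the ruleset metadata the schema requires):

```xml
<?xml version="1.0"?>
<ruleset id="globex-custom-config"
         xmlns="http://windup.jboss.org/schema/jboss-ruleset">
  <rules>
    <rule id="globex-custom-config-00001">
      <when>
        <!-- Fire on any reference to the (hypothetical) custom config library -->
        <javaclass references="com.globex.config.{*}"/>
      </when>
      <perform>
        <hint title="Custom configuration library detected"
              effort="1" category-id="cloud-mandatory">
          <message>
            Replace the custom configuration library with externalized
            configuration, e.g. a ConfigMap or Secret mounted into the pod.
          </message>
        </hint>
      </perform>
    </rule>
  </rules>
</ruleset>
```

Each match surfaces in the report as an incident on the offending class, with the hint text shown next to it, which is exactly what Ramon demonstrates next.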
We won't be using any custom labels. We also have a gazillion options to further fine-tune the analysis, but for the moment we'll stick with the targets. Okay, moving on. The last step is to review that we didn't make any mistakes, which we didn't, so we're good to go and start the analysis.

Okay, so this is going to open up that application archive, look for any of those patterns that match, and then highlight them for me? Exactly, that's exactly what it does. It's a kind of static analysis using the binaries: decompiling them and then analyzing the source code. Since we're a bit tight on time and you're a very busy person, I already ran the analysis beforehand, so I have the results here. These are the reports produced by the analysis. We choose our application, and then the most interesting part of the analysis is the list of issues that have been detected. By issues, we mean the things that are preventing your application from running on the target platform, which in this case is Kubernetes.

Okay, I see the legacy configuration, the hard-coded IPs, the file system access... it caught all three of those. Yeah, exactly. This legacy configuration issue is an occurrence of the custom rule we developed. It shows the number of incidents within each class of your application, and provides a series of hints and links to documentation on how to actually fix the issue. If we click on the class itself, we can navigate straight into your source code and see where the offending lines were detected by the analysis. That could be pretty useful for working on the changes, but we're in a web console here, so we can't actually make the changes.

Yeah, so how do my developers then actually go through and make this change? Can we refactor it?
Yes. One option would be to keep switching from this window to your IDE, but that doesn't feel very productive to us. That's why we developed a series of IDE plugins for the most popular IDEs out there. My IDE of choice is VS Code, and I already have the project open in VS Code with the plugins installed. Once I have my project open, I can go into the plugin view and configure another analysis. I chose the exact same migration targets, cloud readiness, Linux, OpenJDK, and I also used the custom rules. Once we're done with this, I can click run, the analysis will run, and we'll get the results. I already did that, so I have the results here. We can access the exact same report we had on the web console, consumed locally. But again, we agreed that this is not the most practical approach, so we also have the list of detected issues right in the IDE. If we navigate to this persistent config class and open it, there seem to be two hints. If I click here, I can see what the offending lines are; if I need more detail, I can hover to see the description, or open the details. So I can get the full details of what needs to be changed and the actual source code side by side, and start performing the changes, which will be pretty straightforward and easy.

After doing all these changes, my application will be ready to be deployed in containers. But we need to go to the next level: once our source code is container ready, we definitely need to generate all the different manifests and images for this application to run in Kubernetes. And that is something that Move2Kube does, so my colleague Ashok here will show you how this works.

Oh wow, that's great. Thanks, Ramon.
So basically you've taken me through assessing my entire application portfolio, then analyzing the customer app, then understanding what needs to be changed and making those changes. And now you're saying we could use a tool called Move2Kube, inside Konveyor, to actually generate all the objects and manifests and things I need to deploy on Kubernetes. Is that right? That's it. Okay, cool. So, Ashok, from what I understood you're going to focus on the Cloud Foundry pieces, but Move2Kube could just as well be used to create the manifests and objects for the customer service. Is that right? Absolutely. As long as you have your source code or runtime information, you can use it to create your destination artifacts. Wow, that's great. Okay, so let's talk about my Spring Boot apps running on Cloud Foundry. How am I going to move those over?

Absolutely. So you have a couple of Spring Boot apps, the gateway service and the order service, and a Node.js application for the front end. Let's look at them and see how we can translate them. Let's have a quick demo of Move2Kube with the UI. Akash, if you can share your screen? Sure.

Okay, so the first thing we're going to do with Move2Kube is look at the source code. Here is the source code; we have the e2e demo apps folder, so let's look at what's in there. In the code you have the front-end app, and then you have your back-end applications: in the microservices demo you have your orders and gateway services. So what we're going to do now is take a zip of this folder, which we have already zipped as e2e-demo-apps.zip, and give it to Move2Kube to do the translation.
So let's head over to the Move2Kube UI. This is the Move2Kube UI. The first thing we do is create a project, which we call demo, and then we head over to that project. The first thing we need to do there is upload the source code, so we upload the zip file.

Gotcha, that's the source for my gateway and orders apps, and actually everything else too. Yeah, absolutely. Okay, great.

The second thing we upload is the configuration. There is some environment information, like your ingress and other details, that we need to give Move2Kube so it can create exactly the right artifacts. So we upload the configuration to Move2Kube, and you can see both of them there in the UI. The next step is the planning phase. Here it goes through all the files and, using the configuration, tries to understand what services are there. To save time, we did the planning beforehand in another project, so let's head over to that. Now you can see that the plan file has been created, which has information on your different services, the folders in which it found them, and the configuration information about them. The next step is to use this information to do the transformation, so let's try that out. We start a transformation, and it uses the information in the plan file, the source code, and the configs that we gave, does the translation, and gives you the output.

Okay, so you're going to give me the artifacts that I need. Exactly. You can see that the transformation is done, so let's download the artifacts and see what's there. It gives us a zip file.
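As an aside, the same plan-then-transform flow shown in the UI is also available from the Move2Kube CLI. A rough sketch, assuming the same source folder (the directory name is illustrative, and flags may vary between versions; both commands need the `move2kube` binary and so won't run without it installed):

```shell
# Generate a plan describing the services Move2Kube detects in the source tree
move2kube plan -s e2e-demo-apps/

# Run the transformation using that plan, producing the deployment artifacts
move2kube transform -s e2e-demo-apps/
```

This is handy when you want to script the translation rather than click through the UI.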
Let's unzip that file and see the artifacts Move2Kube has created for us. We'll open the unzipped file in VS Code to look at the contents. If you open the cf-to-ocp folder, you have a source folder, which is the initial source we uploaded, into which it has seeded some more files. Let's look at what it has added. It has added Dockerfiles to each of your services.

Oh great, so it created the Dockerfiles for how to build those services here. Exactly, you can create your Docker images using these Dockerfiles. It has also created some scripts which help you test locally; you can use these scripts to build the images and push them to a registry. The next step would be to deploy.

Okay, and under deploy are all the artifacts necessary to deploy on top of Kubernetes? Exactly. You can see your Ingress, your Deployments, the Services, and all the YAMLs you need to deploy, for all your services.

Wow, yeah, that would save me a bunch of time. I would have had to do all of this manually, by hand, for everything running on Cloud Foundry. Exactly. And if you look at the yamls-parameterized folder, it has a bunch of additional Helm charts, Kustomize YAMLs, and OpenShift templates of these files, so if you prefer to use any of those, you can.

Now let's deploy this app into Kubernetes. Let's head over to our terminal and check whether we're connected to the cluster by using kubectl version. We are connected to the Kubernetes cluster. Then we'll push the YAML: we just do a kubectl apply and give it the yamls folder that we had.

Gotcha, so now you're just applying the YAML that was created by Move2Kube, and this should deploy our Cloud Foundry apps right onto Kubernetes. Is that right?
That is correct. We have already built and pushed the images, and now we have deployed all your services. It's creating all the Deployments, Services, and Ingress. Now let's look at whether all the pods are up, and then we can see the app running. So now we can see that all the pods are up. Let's get the Ingress and check the UI. We have the Ingress; let's open it in the browser. And there is your front-end app running, with connections to your back ends.

Wow, that's awesome. That's amazing; that automation really helps make things go a lot faster, to be able to deploy everything. Yeah. Right now you did this manually, but you might want to automate your CI/CD pipelines. If you notice, it has also created the CI/CD artifacts, the Tekton artifacts, that you can use to automate your build process.

Nice. I could put those right into my pipelines and be on my way. Absolutely. Cool. Great, thanks, Ashok and Akash; this is really helpful to see how we could use Move2Kube to move our Spring Boot applications over.

Let me bring this back and reiterate where we're at. We just moved all of our Spring Boot apps over to Kubernetes, so now I have my front end, my gateway, my order service, and my customer service all running on Kubernetes. But I do have a question here: we kind of forgot about the database that goes with the customer service. How am I going to move that over? Miguel, you mentioned moving it over with Forklift. Is that something we can talk through now?

Yeah, for sure. As I said before, you may have, like in this case, a database or some other workloads that you would like to move as they are into your Kubernetes environment. So let me share my screen and I'll show you. Yeah, definitely, I get it.
I'm really nervous to do anything to it, because there's a lot of old PL/SQL in there and we don't really know what's going on. So getting it onto Kubernetes first, and then figuring that out, would be a lot better.

Yeah. So we have this tool, Forklift, that is going to bring virtual machines from your VMware environment into Kubernetes, using KubeVirt, a Kubernetes capability that lets you run VMs in pods, just like containers run in pods, but with VMs. What we have here is that we have deployed Forklift, and it has a configured provider, which is Kubernetes, as you can see here. So we have source and target; this is going to be the target, and we need to add the source. We can simply add the source by adding VMware: we provide the name, the IP address or host name, and the credentials to access it, plus a fingerprint to ensure we're connecting to the right place and there are no man-in-the-middle attacks. Right now we're adding it, and you can see the VMware tab has shown up here, so we now have both source and target providers. It has loaded everything; we can see the hosts here. We've been scanning your VMware environment lightly to get all the data, and right now we're ready to perform a migration.

To do that, we create a migration plan and give it a name. We're going to focus on retail, since that's what we're migrating right now, so let's call it retail. We select the source provider, in this case the vCenter, and the target provider, which is automatically configured, the host, which is your Kubernetes environment. We can select the namespace; I'm selecting globex-retail here, although we could create one from the menu. We go next, and then I have to select the VMs that we want to migrate.
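Everything Miguel is clicking through in the UI corresponds to Forklift custom resources, which is also how he later says the same tasks can be driven from the CLI. A rough sketch of what the resulting migration Plan might look like, based on the `forklift.konveyor.io` API (the resource names, namespace, and map names here are illustrative assumptions, not taken verbatim from the demo):

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: retail
  namespace: globex-retail
spec:
  warm: false                    # cold migration: no change block tracking
  targetNamespace: globex-retail
  provider:
    source:
      name: vmware               # the vCenter provider added above
    destination:
      name: host                 # the local Kubernetes/KubeVirt cluster
  map:
    network:
      name: network-map          # VM Network -> pod network
    storage:
      name: storage-map          # NFS datastore -> standard storage class
  vms:
    - name: retail-database
```

The network and storage maps referenced here are the equivalences Miguel sets up in the next steps of the wizard.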
So I select this cluster, and it gathers all the information about it. Here you see all the VMs, and they're reviewed, in case anything would require manual intervention. In this case, the name of the VM is not valid, and we don't have change block tracking, so we cannot do a warm migration.

Oh, that's really helpful, so it guides you on what to expect: you're going to have to make some changes to the name here, and you can't do a warm migration. Okay, cool. Exactly. We don't want to have issues during the migration; we want to warn as early as possible.

So here we have the retail database that we want to migrate, and we also have some other VMs, like the business rules one and a couple of others; those are for another day. Yeah, we'll keep those out for now.

So let's focus on the retail database. We click next, and now we have to establish an equivalence between the networks in the source and the networks in the target. I'm going to create a map, an equivalence: I use the VM network that the tool has detected being consumed by the VM's network interface, and I select the target network; I'll use the pod network. I can save it as a network map to reuse in the future, and go next. The same thing happens with the storage: we need to establish an equivalence between source and target storage. It has detected that we're using an NFS datastore in the source, and I'm going to use the standard storage class, which is also NFS, in the target, so they are equivalent. If you happen to need faster storage in the source,
you should also select a corresponding storage class in the target, one that you have preconfigured in your environment first. I can save it and click next. I'm going to use cold migration, because we don't have change block tracking; otherwise we would be able to do a warm migration, which copies the data before shutting down the VM. Once the data is copied, we shut down the VM and copy only the changes that were applied to the disk in the meantime, reducing the downtime required and increasing the number of VMs you can migrate in one intervention window.

I like that. Does that sound helpful? Yeah, absolutely, less downtime is better. Yeah. We also have hooks, in case we want to automate changes; we're not automating changes in this case, so we just complete and finish, and the plan is ready to start. Let's say the intervention window arrives: we click start, and the migration begins. Right now the tool is connecting to the source VMware environment using VDDK, which is what the backup tools use to gather the data; if it works for backups, it works for this tool, and we are transferring the disk. As you're a very busy man, I already transferred one VM, so let me show you. I'm using OpenShift here so we can show you a nice UI, but all the tasks I've shown you can be done via the API or CLI on Kubernetes. So this one is importing, and I have another VM already imported. I'm going to connect to that VM. Okay, so I can connect to the console, and, as you will see...

Nice to see my Oracle database running on Kubernetes, that's great. Yeah, it's exactly right there, if I don't mistype it. So I'm going to become the oracle user, and I'm going to check the connections. Right, we have 200 connections, and we need more: we need 250. So here's one thing we can do in Kubernetes that is very interesting. We have this deployment...
Sorry, these ConfigMaps. I created a ConfigMap that is very small and straightforward, and what it does is change the number of connections for me.

Oh, wow. Okay. So now my developers can make changes to this VM straight through Git?

Yes, exactly. I just go to the environment and associate this ConfigMap. I have added one service to the VM to pass the ConfigMap in, and the only thing I need to do is restart the VM, and the changes will be applied.

Wow. That used to take us hours of getting our DBAs to make changes and all that. So now you're saying it's all pipeline-based, right?

It's all pipeline-based. The VM can be managed as just another Kubernetes object, the same way you would manage a container. We could wait and check it, but the thing is that once the VM has started, we'll see that the number of connections has increased to 250. This is how we can move as many VMs as you want; keep in mind this tool is also intended to be able to migrate hundreds of VMs at once.

That's great. Awesome, thanks. This is super cool to see how we can move this VM into a more modern way of working, where I can use ConfigMaps with it and make changes a lot more easily.

Cool. Let me just make sure I'm on track here. We just moved that VM, the Oracle database, over. So now we have the gateway, order, and customer services, our front end, and the Oracle database all running on Kubernetes, except for one last thing, Marco: my inventory service. How am I going to do this?
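As the presenter noted, everything done in the migration UI above can also be driven declaratively through the Kubernetes API. Here is a rough sketch of those steps as custom resources, assuming the migration tool is Konveyor Forklift; the API group, kinds, and field names follow Forklift's CRDs as best I can reconstruct them and may differ by version, and all resource names, the datastore name, the VM name, and the ConfigMap keys are placeholders, not taken from the demo:

```yaml
# Hypothetical NetworkMap: pairs the source "VM Network" with the target pod network.
apiVersion: forklift.konveyor.io/v1beta1
kind: NetworkMap
metadata:
  name: retail-network-map
spec:
  map:
    - source:
        name: "VM Network"
      destination:
        type: pod
---
# Hypothetical StorageMap: the source NFS datastore maps to the NFS-backed
# "standard" storage class, which must already exist in the target cluster.
apiVersion: forklift.konveyor.io/v1beta1
kind: StorageMap
metadata:
  name: retail-storage-map
spec:
  map:
    - source:
        name: nfs-datastore          # placeholder datastore name
      destination:
        storageClass: standard
---
# Hypothetical migration Plan: cold migration (warm: false), since changed
# block tracking is not enabled on the source VM. Provider references and
# other required fields are omitted for brevity.
apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: retail-db-plan
spec:
  warm: false
  map:
    network:
      name: retail-network-map
    storage:
      name: retail-storage-map
  vms:
    - name: retail-database          # placeholder VM name
---
# Hypothetical ConfigMap like the one used to raise the Oracle connection
# limit; the actual keys consumed by the VM are not shown in the demo.
apiVersion: v1
kind: ConfigMap
metadata:
  name: oracle-connection-tuning
data:
  processes: "250"                   # assumed Oracle parameter
```

Once these objects exist in Git, changing the connection limit is a one-line edit to the ConfigMap followed by a VM restart, which is the workflow described above.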
Remember, this was kind of manually deployed, and I need to figure out how I'm going to redeploy it into this new Kubernetes cluster, bring the state along, and also automate it with some kind of GitOps flow, because I don't want to leave it the way it is.

That's correct, and for that we're going to use Crane. Just give me one second; I'm going to share my screen and show you how Crane can help with all of that.

So, the first thing: this was manually deployed in my inventory-source namespace, and now we want to be compliant with our production policies, have this automated, and follow GitOps principles. The first thing I will do is export the current manifests, using a command called crane export. What crane export does is look at what's currently actually deployed in the inventory-source namespace and export all the manifests. Then we can review those manifests and use another Crane command, crane transform, to clean them up: remove everything that is environment-specific. It can do all kinds of things to your files, so you can embrace new technologies and things like that.
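The export step described here amounts to a single command. A sketch, assuming the Crane 2 CLI with a kubeconfig already pointed at the source cluster; flag names are my best reading of the tool and should be checked against `crane export --help`:

```shell
# Export the live manifests from the manually-deployed namespace
# into a local ./export directory for review.
crane export --namespace inventory-source --export-dir ./export
```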
So let me just bring up a window first and show you this new export folder, which now has all your Kubernetes manifests in it. Then I'll use the crane transform command, which looks at those and strips out things like service cluster IPs and metadata that are specific to this environment.

So those are all the things that would trip me up when I'm trying to deploy into a new environment, right? Those are all environment-specific.

Exactly, and that's a common mistake we see: people deploy something on one Kubernetes cluster, but if you really want to embrace hybrid cloud, or even just promote your apps from dev to QE to production, this needs to be automated, and you don't want anything to be hardcoded in there. We want this done programmatically instead of hardcoded in your manifests. To do that, Crane created a transform folder with all the things it detected that should not be hardcoded in your files, and it creates JSON patches out of them, so the changes can be reviewed and analyzed to make sure we're applying the right ones. When we're ready, we can use the crane apply command, which applies the patches and creates all the files you'll want to push to Git. From that point, this can be fully automated using Argo CD to deploy your app on any cluster, programmatically instead of manually, as you deploy or promote your application.

So let's run the crane apply command. Now if I go to my Git folder, I have a resources folder that got created, which holds the brand-new manifests, fully cleaned up and ready to go. The last step is to push that to my Git repository so that Argo CD can pick it up and provision my application automatically. If I go back to my main folder here (oops, I can't type), I have an Argo CD file now, which is the definition:
it's a very simple Argo CD definition with the repository that I want to use to deploy this app and the namespace where I want it provisioned from this point on. As soon as Argo CD is taking care of that for me, I can make changes and push them to Git, and they'll be deployed automatically, all the time, through Argo CD's automated deployment model.

But before I get Argo CD to do all that: this is our inventory service, and there's a lot of data as well, right? There's a database in there, a Postgres database with all your products.

Yeah, we have billions of products. Billions.

So it's good that we're moving a lot. We're moving your application, and then we can deploy it automatically, but there's the question of state, and the state is all the PVs with the data of your database, which also need to follow the deployment as we push from one environment to another. We have another Crane command for that, called crane transfer-pvc, which uses rsync in the background to actually copy the data. It can be run multiple times; it will just copy the data that changed since the last time the command ran. But you have a very big database with a lot of products, so I already ran that for you, so we don't have to wait for all those products to be copied over. We'll save that for today.

Yeah, as you can see in the warehouses behind me, they're filled with products.

So, I already ran that for you, and the database is already there.
So actually now I'm ready to bring in Argo CD. I'm just going to run kubectl create with the Argo CD definition file, and this will get Argo CD to provision my app. Let's look at Argo CD and see how it provisions all of this. Just give me one second here.

You'll see now that Argo CD looks at my brand-new manifests, the ones created with Crane and available in Git, and if I click on the application, you'll see the full deployment of my app, attached to its PVs with the database content.

So Crane basically transformed all my objects, you pushed that into Git, and now Argo CD is just taking over and redeploying it, and now my deployments are basically automated.

Yeah, and now you're following GitOps principles, which should be the best-practice way of deploying apps, promoting apps, and embracing hybrid cloud, since you can now deploy on any cluster and it will just follow along, with everything fully automated. Any time you want to make a change, you make it in your Git repository, and Argo CD keeps provisioning the latest changes for you automatically.

Great. Awesome. Thank you so much, Marco; that was a great demo. So this is basically the end state we've gotten to: we've seen how we can have all of our services running on Kubernetes, leveraging a GitOps paradigm, and simplifying our operations by running on a single Kubernetes platform.
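The "very simple" Argo CD definition isn't shown on screen; assuming stock Argo CD, an Application along these lines would match what's described (the repo URL, path, and namespaces are placeholders):

```yaml
# Hypothetical Argo CD Application pointing at the Crane-transformed manifests.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: inventory
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/retail/inventory-gitops.git   # placeholder repo
    targetRevision: main
    path: resources            # folder produced by crane apply
  destination:
    server: https://kubernetes.default.svc
    namespace: inventory
  syncPolicy:
    automated:                 # auto-sync: Git commits trigger redeploys
      prune: true
      selfHeal: true
```

With `syncPolicy.automated` set, any change pushed to the repository is rolled out without touching kubectl again, which is the "make it in your Git repository" workflow described above.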
So this puts our retail application in a great place. Now we can start to plug cloud services into it, bring in AI and ML, and all the new cool things that are available when you're running your application natively on Kubernetes.

Hopefully this demo by the Konveyor group was helpful. Just to reiterate some of the tools you've seen across rehosting, replatforming, and refactoring, you can see them here. In the future we're going to be bringing in Pelorus as well, which will help us measure those DORA metrics, so that we can see the improvements we're making in software delivery over time.

If you're interested in joining us on this journey, please find us at konveyor.io; this is where the community hangs out. We're interested in understanding how you're modernizing your apps, and in building tools to rehost, replatform, and refactor your apps to run on Kubernetes and use other cloud-native technologies. You can find us on the Konveyor Slack, and you can learn more by joining our meetups. If you're interested in proposing a meetup talk, you can email us at konveyor.io@gmail.com.

With that, I will go ahead and stop the recording. Thanks for joining us.