Good morning, good day, good evening to everyone. As we're starting today, we'll see how quickly we can do a Spring Boot deployment on Kubernetes. Let me quickly go and start my presentation. So this is who I am and what I do. I work as a director of developer experience at Red Hat, and I'm an active open source contributor to projects including Spring Cloud Functions and the Fabric8 Maven plugin, and I contribute to the fabric8 platform. As was said earlier, if you have any questions, please feel free to send them over my social handles, which are there on the side of your screen.

OK, without further delay, let me go ahead and start what we have for today. Everybody knows cloud-native microservices, so I'm not going to go into the details of this; it's there for your reading when you get the slide handouts. I'll quickly share a short history of microservices. Maybe I'll just flip through these, because I'm more interested in doing a demo today rather than talking to slides. You've probably seen this in an earlier session in this series. We're most interested in the last part of it, roughly from March 2012 until June 2014.

This is what my microservice app usually looks like. For example, I use caching, an HTTP server, and then my framework of choice: Spring Boot, WildFly Swarm, or Vert.x. I package it all up very nicely, and then I take it to the Spring Cloud platform, with Eureka for my service registry, Config Server for my configuration, Netflix Ribbon for load balancing, and then Zuul, Hystrix, Zipkin, and all of these sitting nicely on the Spring platform. That was 2014. But it has its own drawbacks, and I just want to quickly highlight a few of them.
You can read more details once you get the deck. A couple of things. Managing services on multiple hosts: how do we manage all of them, more from an automation perspective, rather than somebody coming in and keying in their scripts and all that stuff? How do I scale up and down? Let's say it's Christmas and I want to scale my application up for a huge load, and after that it has to come back down; that should be pretty easy to do. How do I keep the actual and desired state of my application in sync? How do I avoid port conflicts? How do I update applications with a rolling upgrade? And above all of this, how do I use containers?

That's what we're going to see today. I'm taking you from the old-school way of doing things, typically like the Flintstones meeting the Jetsons; I want to take you from the Flintstones side to the Jetsons side. And this is the guy who's going to do it: Kubernetes. Most of the people on this session will probably have heard about Kubernetes. The few points I want to highlight here are important: it's a container orchestrator, it's open source obviously, and it manages applications. All these years before Kubernetes, we have seen systems that manage infrastructure; Kubernetes is the first of its kind to come along and manage applications. Quickly, what we did at Red Hat is take Kubernetes as the upstream project, add some enterprise stuff, and give you what is called OpenShift Origin. And with OpenShift Origin you get three other flavors as well. I'm not going to go into more detail.
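One of the points above, keeping the actual and desired state in sync, is worth a tiny illustration. Kubernetes does this with a reconciliation loop inside its controllers. This is only a toy sketch of that idea, not real Kubernetes code; a real controller watches the API server, and here "state" is just a replica count:

```java
public class Reconciler {
    // One reconciliation step: nudge the actual replica count one step
    // toward the desired count, the way a controller converges state.
    static int reconcile(int actual, int desired) {
        if (actual < desired) return actual + 1;  // start one replica
        if (actual > desired) return actual - 1;  // stop one replica
        return actual;                            // already converged
    }

    public static void main(String[] args) {
        int actual = 1, desired = 3;  // e.g. Christmas load: scale 1 -> 3
        while (actual != desired) {
            actual = reconcile(actual, desired);
            System.out.println("replicas: " + actual);
        }
    }
}
```

The point of the pattern is that you declare the desired state and the platform loops until reality matches it, instead of you scripting the individual start and stop steps by hand.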
Just know that OpenShift is enterprise Kubernetes. All right, these are the microservice qualities that you usually see, and we'll see how Kubernetes satisfies many of these 12-factor-app-related properties, like monitoring, discovery, invocation, and elasticity. On top of that, OpenShift gives you logging plus pipelines. So this is how nicely Kubernetes and OpenShift fit into the microservice scheme of things: an ideal platform for developing cloud-native Java applications.

All right, coming back to our hero of the day: how does my Spring Boot application look in Kubernetes? We all know that java -jar will get me and my application running. But today you'll see that Kubernetes gives you much more. My app jar is still there, but I'll do a nice packaging and deploy it as a container into Kubernetes, with a base container image giving me all the stuff that's needed.

All right, quickly to the demo. Let me show you what I'm going to run today. I made a pretty simple application: a simple REST controller which says "where am I?". It just prints out the host name from where this application is deployed. So let's do it quickly. What do you usually do in the Spring world? I just do ./mvnw spring-boot:run, and I'm showing you how it usually runs. Then I just do a simple curl to localhost:8080 with the application running locally, and I get a hello from localhost. So let's quickly flip and deploy. For running in Kubernetes, as our title says today, it's going to be pretty easy and awesome: for a Maven user, it's just another Maven goal that I need to run.
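Stripped of the framework, the logic of that "where am I" endpoint is tiny. Here is a minimal runnable sketch of just that logic in plain Java; the demo's actual app wraps the same thing in a Spring @RestController, and the class and method names here are illustrative:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class WhereAmI {
    // Same idea as the demo's REST endpoint: the response is
    // "Hello from <hostname>". Locally that's your machine; inside
    // Kubernetes it is the pod name, which is what makes the
    // scaling and load-balancing demos visible.
    static String greeting() {
        try {
            return "Hello from " + InetAddress.getLocalHost().getHostName();
        } catch (UnknownHostException e) {
            return "Hello from unknown-host";
        }
    }

    public static void main(String[] args) {
        System.out.println(greeting());
    }
}
```

Run locally this prints your machine's host name; once the app runs as a container in a pod, the very same call returns the pod's name.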
I'm going to do mvn clean package first, just to clean things out so I can show you that everything is getting built fresh. All right, the application is built. Now what I'm going to do is fire up one more Maven goal, from the Fabric8 Maven plugin. I have all the links given at the end of my deck, which will be shared with you, so you can find a link to this Fabric8 Maven plugin and read more about what it does. I'm just going to do mvn fabric8:deploy, and the application now gets packaged. You'll see the Docker build getting started on the command line; it picks up the right base Docker image that's required here and then deploys the application into OpenShift.

Let's quickly go back to my screen. I need to leave the presentation so I can show you the console, and you'll see this application getting deployed. The build pod is getting created here, and you'll see the same build logs which you saw on the command line right here. Just wait for the application to come up; you can click here and see the logs if you want, and you'll see the application is up and running. Once the app is running (let's quickly wait; it's up), I'm going to do a curl. I already have the command copied for convenience, so I just paste it, and OK: you see this hello coming from the Kubernetes pod. How do we see the pod? oc get pods. You'll see the first one right there, the hello-boot pod, which is running right now. I repeat the command again and we see the same thing; it's the same pod. The host name now turns out to be the host name of the Kubernetes pod that's deployed here. And naturally, Kubernetes gives you load balancing as well.
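For reference, the wiring that makes mvn fabric8:deploy available is just a plugin entry in the project's pom.xml. Something along these lines; the version number is illustrative, so check the plugin documentation for a current one:

```xml
<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>fabric8-maven-plugin</artifactId>
  <version>3.5.41</version>
  <executions>
    <execution>
      <goals>
        <goal>resource</goal>  <!-- generate Kubernetes/OpenShift manifests -->
        <goal>build</goal>     <!-- build the container image -->
      </goals>
    </execution>
  </executions>
</plugin>
```

The deploy goal then applies the generated resources to the cluster, which is why nothing beyond a Maven goal is needed on the developer's side.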
So what I'm going to quickly do is scale this up to two pods. Let's wait a moment for the second pod while I modify my curl command: I'm going to do the same curl, but repeat it 10 times. There we go, both our pods are up and running. Then I go back to my console; let's see if we can get rid of this nasty escaping that happens on my side of the screen. OK, there we go. You see that it's flipping between the two pods behind the load balancer, with host names ending in different suffixes. If you go back to my demo, you'll see there are two pods running there, so Kubernetes is doing the load balancing for you naturally. Ideally, I'm doing nothing extra. As a Java user I still run the same mvn fabric8:deploy, and my Spring Boot application is right now running on OpenShift.

So what do we do next? Let's go back to my slides; I'm not going to run through all of them, just see what's next. As we saw right now, with Kubernetes all these components, like Eureka, Netflix Ribbon, Config Server, and all that stuff, get replaced by Kubernetes itself. I'll skip these slides for you to read further; these are some modules which you can add as well. This is what we built last year, the same stuff, just replacing the tracing piece with OpenTracing as the standard. And this is something we plan to build in 2018; once it shows up, you'll see it soon.

I want to go back for another quick demo, so maybe I can skip this slide. I want to do a demo on discovery and invocation. People are used to Ribbon and how Ribbon does discovery of their servers and all that stuff. So what I've done is develop a simple calculator app that just does add, subtract, multiply, and divide.
What I did is take this calculator and then define a calculator-service client, which calls this calculator app: I have an endpoint in my client at the URL path /add, which goes and discovers my service and then calls it. For people who are already used to this kind of discovery application, you'll see my application is not changing in any way. I still have a Spring Boot application with @EnableDiscoveryClient for doing service discovery, and a Ribbon client defined. The one thing I'm doing here is making sure that my service name is exactly the same as the name of my Kubernetes service.

All right, let's quickly go and deploy this. You'll see the command doesn't have any change: mvn fabric8:deploy. Let's switch to the console to see the calculator pod coming up, and while that comes up, I can scale the earlier app down to zero, just to save some memory. OK, it's getting deployed, and you'll see that, yeah, the calculator is getting deployed here. We'll wait for this to come up. Come on, there we go. We've got the simple calculator service up and running for us.

So now I have to go and deploy the discovery client. There's no change to my command: mvn fabric8:deploy. In case you need to package it, just do a package and then a deploy. Let's wait for this one as well to get deployed so that we can fire the discovery call. As I said earlier, this discovery application is going to discover the simple calculator service, which is a Kubernetes service deployed inside your OpenShift cluster, and then respond back to you with the addition of two random numbers.
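The reason "the service name must match" works is that on Kubernetes the discovery registry is effectively the set of Service objects themselves, which the cluster exposes over DNS. The Fabric8 plugin generates a Service manifest roughly like this for the calculator; the names and port here are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: simple-calculator    # the name the discovery client must use
spec:
  selector:
    app: simple-calculator   # matches the labels on the calculator pods
  ports:
    - port: 8080
      targetPort: 8080
```

Any pod in the same namespace can then call http://simple-calculator:8080/add, and the cluster spreads those calls across the pods behind the selector, which is the same "natural" load balancing we saw in the first demo.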
What I'm really trying to show here is that for a Spring Boot developer who's quite used to Netflix Ribbon and this kind of discovery, from the application perspective there's not going to be any change when you take it and deploy it. We only need to add the additional plugin, the Fabric8 Maven plugin, which generates the Kubernetes resources required to deploy the Spring Boot application as a containerized application inside OpenShift.

So let's see where it is. Yeah, it's there. I'm just going to do the same curl. There we go: two random numbers are generated and the call goes through. If you want to observe the call, I'll run it once more so I can show you. I'm calling the client, the consumer of the calculator service, but the call is actually going to the actual service, the calculator, and getting the response back. The answer is 1875190413, the same thing you got here.

Let's move on to the last quick demo, which is canary deployments. For people who are interested in knowing about this, let me quickly run through the official definition of what a canary is all about, and then we'll see how a canary works in a real deployment. Back to our quick demo. I have a Greeter application, which says hello from a REST URL path /hello. The first version we're going to deploy says hello, the same as our hello-boot controller earlier, but we'll see how we deploy it. I'll use Maven profiles here. The first deployment is the non-canary: just a clean package, and once the package is done, fabric8:deploy. Now the deployment is happening, and we can see what's happening on the screen; I'm just going to split my view. The application is getting deployed right now, and it creates a pod.
It's successfully deployed; we'll wait for the pod to come up. So you'll see a Greeter deployment running here. Let's see how it works. Fantastic, the app is running. I curl it again, and we've got "hello". But now, what I'm going to do is go to my code and change it to say "hi". I'm back in my code, and I need to do a clean build. In this case I have a Maven profile for the canary. First I do a package for this application, because the existing build has the older, non-canary code; since we've changed the application to the canary code, I'm rebuilding so that the container has the latest changes. Then I run with the canary profile and do fabric8:deploy again, so it builds the image once more.

While it does this: the canary is quite evident when we run multiple calls, because you'll see the responses shifting. I'm going to run the curl loop again to see the responses changing between canary and non-canary. What you observe right now is that we already had the non-canary deployment from earlier, and now the greeter canary deployment is done with the change we made to the code. So we have two things running. The way to verify is to do oc get pods, and we see two pods here: the greeter pod and the greeter-canary pod. When I run the loop, my application is shifting between non-canary and canary, non-canary and canary. Now say I feel my canary is doing pretty well. What I'm going to do is increase my canary count by one, and now we see every two requests going to the canary and one request coming to my non-canary.
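What makes this plain-Kubernetes canary work is simply that the pods of both deployments carry a label matched by one Service selector, so the Service spreads traffic across all of them in proportion to the replica counts. A sketch of the idea in generic Kubernetes terms; the names and images are illustrative, not the demo's actual resources:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: greeter            # the stable, non-canary version
spec:
  replicas: 1
  selector:
    matchLabels:
      app: greeter
      track: stable
  template:
    metadata:
      labels:
        app: greeter
        track: stable
    spec:
      containers:
        - name: greeter
          image: example/greeter:1.0   # says "hello"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: greeter-canary     # the canary version
spec:
  replicas: 2
  selector:
    matchLabels:
      app: greeter
      track: canary
  template:
    metadata:
      labels:
        app: greeter
        track: canary
    spec:
      containers:
        - name: greeter
          image: example/greeter:1.1   # says "hi"
---
apiVersion: v1
kind: Service
metadata:
  name: greeter
spec:
  selector:
    app: greeter           # matches BOTH tracks, so traffic splits ~2:1
  ports:
    - port: 8080
```

Scaling either deployment changes the ratio; that is exactly what happens below when the canary count goes up and two out of three requests start hitting the canary.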
So Kubernetes' traffic distribution is not a smart canary. As we said a couple of slides earlier, if you want a smart canary, you can check out Istio and other tools that do much smarter canary-style routing. Now I'm going to repeat this again, and we'll see the requests distributed between the two: two requests go to the canary and one comes to the non-canary, two to the canary and one to the non-canary, right? So this is how canary deployment can be done with Spring Boot-based applications on OpenShift using the Fabric8 Maven plugin.

The last thing I want to show is how we undeploy. I'm going to take the example of undeploying my hello-boot application; it's still there, though no pods are running right now. mvn fabric8:undeploy. See, it's simple; that's it. It deletes all the Kubernetes and OpenShift resources that were created for this particular Java application, and you'll also see the pods disappear from here.

And that's pretty much what I have for the day. If you want, I can go ahead and take any questions. The first question: where do I get the slides? As always, you can download them from this platform. Can you also provide the URL again, Kamesh, for where they can go find the slides? Sure, I can provide the URL right away in the chat. Yes, the summary slide on the platform has the URL for the slides. On to the next question. There's another question around Gradle: the use of Gradle as opposed to Maven? Yes, I think that's the usual question we get from any Java developer, but unfortunately right now the Fabric8 Maven plugin does not support Gradle projects, so it's Maven for now. As for Istio: right now we don't have any support for Istio in the Fabric8 Maven plugin.
So we still have to rely on Istio's own tooling and all of that; maybe that question is a bit out of context and would start a whole new thread, but right now there is no Istio support in the Fabric8 Maven plugin. Fabric8 is not only supported on Kubernetes; it's also supported on OpenShift. As you saw in these demos, I was running the Fabric8 Maven plugin against Minishift, a single-node OpenShift cluster. If you have a Kubernetes cluster running, or something like Minikube, then you'll see the Fabric8 Maven plugin doing a build for Kubernetes instead.

Next question, about Ribbon and Zuul: which module is responsible for client-side load balancing in Kubernetes? That changes: there is no concept of client-side load balancing with Kubernetes. The load balancing is done by the Kubernetes cluster itself, so nobody is responsible for it at the client level right now. As we touched on in one of the slides, the future of this architecture is that we'll be getting Istio in here, and once we get Istio, you'll see load balancing being taken care of by the service mesh rather than relying completely on Kubernetes.

Yeah, our last session was on Istio specifically, and we also published the Istio tutorial. If you're interested in Istio, we have a huge tutorial walking through all the aspects of it, including using Spring Boot, and there's also a Vert.x endpoint in that set of examples. For the sake of the audience, I'm posting the URL of the demo projects to the chat so that people can grab them and play with the projects. And you should soon find a copy of the slides here as well, so you can download the slides from here too. It's time to wrap up. As always, thank you so much, Kamesh. We had several hundred people on the line with us today; hopefully you found it interesting.
And if you have more questions, feel free to hit us up on Twitter. Most of you have my email address from the announcements for these various events; feel free to email me and we will try to get back to you as soon as we can. Kamesh, thank you so much. Thanks everyone, thanks Laura.