Hello and welcome to DevNation Live. Hopefully you guys are excited to be here for another live session. If you caught our last session, we had a little technical difficulty. We think we sorted that out, and we'll be broadcasting live for the next 30 minutes, giving you lots of great content. We're gonna be here the first and third Thursday of every month. We're gonna keep doing this. We have tons of things lined up, as you've seen from the webpage.

And today we have a very special guest star, Rafael Benevides, who's gonna give us a really awesome deep dive into Kubernetes. A couple of things to say about Rafael: he's a director of developer experience here at Red Hat. He's one of our lead senior developer advocates who travels the globe, often doing Kubernetes workshops and getting hands-on with Java developers, showing them how to use Kubernetes for the very first time. And today he's gonna give us a nice introduction to how Kubernetes works.

For those of you who have questions, please use the chat window, and I'll verbally relay those to Rafael, or try to chat back with you and answer them live in real time. The chat window is your way of communicating back with us. Also keep in mind, this broadcast will go live to YouTube immediately after we hit the close and finish button, so it'll be available to you instantaneously. So again, if you have questions, hit us in the chat. And Rafael, please take it away.

Hello everyone, it's a great pleasure to be here presenting to you this introduction to Kubernetes focused on Java developers. So let me start sharing my screen so we can take a journey together through Kubernetes, specifically for Java developers.

One of the first things that we need to understand, because most people ask: isn't Java supposed to already be a multi-platform language? We need to understand why we want to run Java applications inside containers. Of course, containers have a lot of advantages. They're lightweight, and because they're lightweight, they simplify DevOps practices and empower microservice architectures. That's why we want to show you how to do those things using containers and using Kubernetes.

One way to run a Linux container is, for example, to go to your terminal and type docker run -d, to create a detached process, and the name of an image. That will create a single, isolated Linux process running on a single machine. But let's go crazy here. Let's say that you have not six but 100 machines in your data center or in your cloud, and you decide to run a thousand containers. In that case, what are the difficulties? What are your challenges? You will probably face challenges scaling those containers, avoiding port conflicts, updating them, and knowing where they are running. To answer all those challenges, we have the Kubernetes project.

The idea of the Kubernetes project is to be an orchestrator for containers. It's not focused on managing machines, but on managing applications. It's a very popular project. We can go to the GitHub project of Kubernetes, and we will see that it's very popular, with more than 25,000 stars, a lot of commits, and a lot of contributors. We can see here, in the Pulse of the project since last week, that it keeps moving forward and has a lot of participation. And among those participants, we can mention that Red Hat is the second top contributor after Google itself.
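For reference, the single-machine starting point Rafael describes looks roughly like this; the image and container names here are placeholders, not ones from the demo:

```sh
# Run a single detached, isolated Linux process on one machine.
# "some/image" is a placeholder image name.
docker run -d --name my-container some/image

# Verify the container process is running on this host.
docker ps
```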
It's a project started by Google, based on its expertise using containers. Well, Kubernetes has the following architecture: every container runs on individual nodes, okay? And inside the Kubernetes cluster, you have a software-defined network that allows, for example, the WildFly application running here to communicate with the Postgres running on another node. And of course, there is also a service layer, which I will talk more about later. The Kubernetes cluster can run on physical machines, virtual machines, private or public clouds.

We all know that containers are ephemeral. For example, suppose that we have data stored on a Postgres database. What will happen if that container dies? So Kubernetes also supports persistent storage. And to manage this cluster and all these containers, we can have one or more masters to take care of that, okay? And to allow access to those containers, we also have a routing layer.

To understand Kubernetes, we need to understand at least these four concepts: what pods are, what deployments are, what services are, and what labels are. Today, we are going to focus on pods, deployments, and services. Given our time, we will not cover labels, but that's a simpler concept, and I believe we can skip it at this moment without any impact.

Well, let's start with the pod. A pod, like the name says, is a collection of whales. So in the world of containers, a pod is a collection of containers, because in certain applications several containers need to travel together on the cluster. For example, you have an application, you have an administrative console, you have a log collector, and all those containers need to go together. So we will have a pod with three containers. All the containers in a pod share the same IP address, the same volumes, the same environment variables, okay?

So let's start here doing a live demo and playing with Kubernetes. To manage a Kubernetes cluster, we have the command kubectl. I have already deployed here a pod running MySQL. Okay, so if I do a kubectl get pods, we can see that I have a MySQL pod running: it's ready, no restarts, it was created two hours ago. And we can explore more details about this pod by running kubectl describe pod and the name of the pod. That will show us that this pod has the IP 172.17.0.3. It has one container only; most of the time each pod will have only one container, but remember, you can have more than one. This container is running this MySQL image. You can have limits for CPU, you can have environment variables, you can have mount points for volumes; in this case, I have a persistent volume, where the data of this MySQL is stored outside of the container.

But let's continue. Another thing that we need to understand is something called a deployment. The deployment has a replication controller that says, for example in the case of this cluster, that I asked Kubernetes to keep four replicas of a Tomcat container running. But what happens if a container fails for any reason: someone powers off the node, the node dies, it runs out of resources? Kubernetes, through the replication controller slash deployment, will detect that that node is not available anymore, and it will move that container to another node. So let's see that happening here with kubectl.
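The two commands used in this part of the demo look like this; "mysql" stands in for the actual pod name reported by the first command:

```sh
# List pods in the current namespace: name, readiness, restarts, age.
kubectl get pods

# Show a pod's IP, containers, image, resource limits,
# environment variables, and volume mounts.
kubectl describe pod mysql
```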
If I use kubectl get deployments, I can see that I have one replica as the desired number of replicas of MySQL. But suppose that I delete this pod: kubectl delete pod. Now that this pod is deleted, if I do a get pods, I will have another replica running. It's being created; it will be ready in a couple of seconds. That's because the deployment makes me always have one replica running.

But let's do another thing. Let's analyze this pod that has just been created. So let's describe this pod here, and we will see that this pod now has another IP address, 172.17.0.4; the previous one was 172.17.0.3. So every time a pod dies, it can be reborn, restarted with a new IP address on another node. And that brings us to another Kubernetes concept. This concept is called services. A service is a kind of load balancer that will locate every pod based on its labels and send requests to those pods. The service has an internal cluster IP that will not change once it's created.

So let's explore that. Let's do a kubectl get services. We have a service called mysql with this cluster IP address that will not change. So let's explore that with describe service mysql. We can see that every request that arrives at this cluster IP will be sent to that pod, 172.17.0.4, because it uses the label app=mysql as a selector. So if we do get pods asking for the label app=mysql, we'll see that this is the only pod that will receive the requests, okay?

But remember that I told you how to run a Docker container. Let's see how easy it is to run a Kubernetes pod, starting from Docker. By the way, let me show you something here. We will create a microservice application, okay? One that uses this MySQL. We'll deploy two applications here: one called guestbook, written in Java and implemented using WildFly Swarm, and another microservice called Hello World, implemented using Vert.x, which was presented last time on DevNation. And there will be a front end written in Node.js that will use both microservices.

So what I will do now is create the guestbook application using Kubernetes. Of course, I already have an image of that, but suppose that I would run it with Docker. The command would be docker run. I could specify the name, calling it guestbook-service, and the name of the image, right? The image would be rafabene/microservices-guestbook, and we can specify the tag. That's how we run Docker containers, right? To run it on Kubernetes, it's as simple as this: we replace the docker command with kubectl. The name is already the second parameter, and here we specify the image. See, it's still almost the same. The difference is that I can also specify labels. I can specify, for example, that this is my application called guestbook-service, this environment is production, and I'm running version 1.0. And I can also specify the number of replicas; I want to run two replicas at this moment.

So when I do that, it creates a deployment called guestbook-service. Let's get the deployments here: get deployments. It says that I desire to have two replicas running. I can do a kubectl get pods: I have two replicas of the guestbook service running. And let me also show you something that's very interesting. On top of Kubernetes, I have OpenShift running. So I'm using here Minishift, which is part of the CDK, the Container Development Kit, that you can get in the DevSuite.
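A sketch of that command, using the kubectl behavior of that era, where run created a deployment and accepted --labels and --replicas (newer kubectl run only creates a bare pod, so today you would use kubectl create deployment instead); the label keys and values here are inferred from what Rafael says on screen:

```sh
# Create a deployment named guestbook-service with labels and two replicas.
kubectl run guestbook-service \
  --image=rafabene/microservices-guestbook \
  --labels="app=guestbook-service,env=production,version=1.0" \
  --replicas=2

kubectl get deployments   # desired replicas: 2
kubectl get pods          # two guestbook-service pods running
```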
I believe that someone will share the link with you in the chat box at this moment. So we run minishift console to open the OpenShift dashboard. I have here the devnation project, and we can see the pods and the deployments that I've just created. So I have here the MySQL, and I have here the guestbook with two replicas based on that image I created. But until this moment, I don't have any service. So how can I expose that deployment as a service? Again, I can use the kubectl command: expose the deployment guestbook-service, and I can specify the port, 8080, okay? That will create a service. Let's refresh the screen, and now I have a service that balances the requests across these two pods, okay? We can also do that using the console. You can see here that those two endpoints will receive the requests that I make to this microservice.

Of course, I can create pods and deployments and services using the command line, but that would require more commands if I want to specify environment variables and volumes and so on. That's why it's a good practice to define those resources inside a YAML file. So for example, I have here, for the next microservice using Vert.x, a YAML file that specifies the number of replicas and the template of my pods: what the containers are, the image, the ports. I can specify volumes, environment variables. So it is much simpler if I do a kubectl create -f and specify each file here. I can specify the deployment, I can specify the service, and I can do the same for the frontend deployment and frontend service. A single command will create everything, and I have my complete application running here. Okay, so in a couple of seconds, we will have the application running.

Something that I can do, that's a feature of OpenShift, is create a route, and that will allow me to access this application externally on port 80. Given that I want to access the frontend UI, I will click here on create route, which will ask for a hostname. I can use, for example, frontend, the IP address, and nip.io, a service that resolves any hostname containing an IP back to that same IP. I specify the destination service and the port; I'm using port 80. Once I hit the create button, that will create a route, and I can open this route here by clicking on the link and... oops, there's a small error over here. Let's see why. It's probably because my application is still loading.

Another good feature of the OpenShift console is that, just as I can do kubectl get pods here and then kubectl logs on a container, I can also open the logs and view them using the dashboard. So let's refresh the application. Yeah, now the application is working. What's our name? Rafael, welcome to DevNation Live. Okay, the message is there, it's persisted. I can even come here and delete my MySQL pod. Okay, we will see that Kubernetes will take care of replacing that MySQL instance, and we can even refresh our application and see that the data is still there.

But now let's make things even easier. I will now create a new microservice from scratch. Let's start playing with OpenShift, because I want that microservice to be in another project. So I can come here in the console and, instead of kubectl, use the command oc, the OpenShift client. It's essentially a superset of kubectl; we can even do an oc get pods and we will have the same result.
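The demo's actual YAML files aren't shown on screen, but a minimal sketch of one, with hypothetical names for the Vert.x hello service and today's apps/v1 schema, might look like this, created in one shot with kubectl create -f hello-service.yaml:

```yaml
# hello-service.yaml -- hypothetical names, for illustration only
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-service
  template:
    metadata:
      labels:
        app: hello-service
    spec:
      containers:
      - name: hello-service
        image: rafabene/microservices-hello   # assumed image name
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  selector:
    app: hello-service
  ports:
  - port: 8080
    targetPort: 8080
```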
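And the command-line way of checking logs mentioned above is simply this; the pod name placeholder comes from the get pods output:

```sh
# Find the pod, then stream its logs.
kubectl get pods
kubectl logs -f <pod-name>

# The OpenShift client accepts the same verbs.
oc logs -f <pod-name>
```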
We can do oc get services, oc get deployments, okay? But we can also do oc new-project microprofile. I will create a MicroProfile application. Once I do that, I can see here the project that I just created, microprofile. And now let's go to the WildFly Swarm web page. On the WildFly Swarm web page, we have the generator, where we can create projects, and I will create a MicroProfile application. I can just hit generate project, and that will give me, let me place it here on the desktop, a zip file, and I can go to the desktop and unzip the project that has been created.

You can see it's very simple: it's a pom.xml and a Java endpoint. If we take a look at the pom.xml, it's really, really simple. We have just the dependency, and we have the WildFly Swarm plugin that will convert the WAR file into a fat JAR, an uber JAR. And what I can do now, to run this fat JAR inside Kubernetes, is run a Maven plugin with the goal setup. This goal will modify the pom.xml to insert the Fabric8 Maven plugin, which allows me to have, let me show you, the deployment inside OpenShift as part of the build. So we can see here that we now have that Maven plugin as part of the pom.xml.

Now it's easy. I can just do mvn fabric8:deploy. That will call both plugins: it will call the Fabric8 Maven plugin, and it will also call the WildFly Swarm Maven plugin. We can see here that it's running in OpenShift mode, so it detected that I have an OpenShift instance running. We have the WAR file created; now it's time for the WildFly Swarm plugin to convert that WAR file into a JAR file. Once the JAR file is created, it will upload the resources here to a build, and that source will be converted into an image. So while this process is happening, let me ask if you have any questions so far. Any questions from the chat box?

There is one question, and it's related to the rebirthing, if you will, the respawning of MySQL, since you're demonstrating MySQL here. What is your perception of a database connection pool, and what would it need to do to reconnect back to, let's say, a recycled MySQL pod?

Oh, that's a good question, because as you could see, I destroyed the MySQL instance and my application was able to reconnect. In that case, let me show you the sources; as I say, talk is cheap, show me the source. The guestbook service is implemented using WildFly Swarm, and you can see that I specified a data source with background validation, a valid connection checker, and validate-on-match. So if a connection is not valid, it will be dropped from the pool, and the pool will try to connect again, okay? Oh, we can see now that it got the binaries, and it will convert the source code to an image. Is there another question?

Not at the moment, but I did have one question. Do you have any specific advice on what types of Java applications fit nicely in a Docker container, Kubernetes, OpenShift world?

Well, I would say that every Java application is a good fit for the container world: Spring Boot, WildFly Swarm, or even a reactive application like Vert.x. For example, the other service that I deployed was a Vert.x application. By the way, we can see here that the image has been generated from the source code. Once it's ready, it will push the resulting image to an internal image stream, which is an internal registry.
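A sketch of the two Maven invocations from that step; the coordinates are the Fabric8 Maven plugin's, and invoking the setup goal without a pinned version like this assumes Maven resolving the latest release from the repository metadata:

```sh
# Rewrite pom.xml to add the Fabric8 Maven plugin (the "setup" goal).
mvn io.fabric8:fabric8-maven-plugin:setup

# Build the WAR, let WildFly Swarm repackage it as an uber JAR,
# build an image from it, and deploy it to the OpenShift cluster.
mvn fabric8:deploy
```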
And once that the image is available, an application will be created for you with a replica and also are already with a route. So let's take a look at the logs. We can do that again using the Qubectl logs. For example, Qubectl get pods, Qubectl logs and the name of the pod. We can even do a Qubectl log-f to follow the logs. It's exactly the same response. Okay, or even using OC instead of Qubectl. It's the same result as I said, Qubectl OC and it's a superset of Qubectl. Now that the application is loading, let's open the endpoint and as for the endpoint, hello. It's still deploying. I don't have the application running yet. Yeah, here, hello from wildfly swarm. So you could see that with a few clicks, I just, I generated a project. I run Maven setup and Maven deploy and my application is already deployed inside a Kubernetes slash OpenShift cluster. So that concludes our presentation of Kubernetes for Java developers. If you have any question, you can do it now, of course, but also I suggest all of you to follow me at the Twitter handle at Rafa Benny because we are always talking, I'm always talking and giving tips about Kubernetes containers, microservices, and things related to Java development. We also have some interesting recordings related to Kubernetes and how to get started with different Java workloads. There's one out there specifically for Spring Boot on Kubernetes, a session we reported in DevOps UK not too long ago. And Rafael actually does a lot of sessions, a very strenuous, if you will, Kubernetes workshop that he's been doing. He's got several of those scheduled coming up throughout the fall. So you can look for him, follow him on Twitter, and then maybe you can get a chance to show up on those workshops. So we'll be doing more and more of this kind of content as we go forward. Does look like there's one more question here. What is the best open source project to start with? That's an interesting question. What is the best open source project to start with? And it's a trick one because we have several open source projects for the most different purpose. I even saw an open source project for home automation, for a mid or we have plan out. So it all depends on kind of project that you're looking for. Yeah, there's so many amazing open source projects. Here's another great question. What is the difference between Docker Swarm and Kubernetes? And I know that's something that you spent some time researching. Yeah, well, the idea is more or less the same. It's to orchestrate containers, but I prefer to say that Kubernetes has a lot of attraction given that it uses Google's experience on containers. And the Google experience with containers and there's come from 2004. I know that it doesn't sound like so much time from a human point of view, from a human perspective, but from the technology perspective, it's a tremendous amount of time. And even because of these Kubernetes concepts, I know that Docker started a new project called SwarmKit which copies the idea of Kubernetes because the Kubernetes concept, like I said, the pods, the concept of services and present volumes make things much more flexible and suitable for all kinds of application. We have other question. Yeah, there was another question I was trying to answer it actually via chat and it's an interesting one though we are out of time. So, but it is a fun one. What do you consider to be, as far as memory cost associated with things like wildfire swarm? An application server is considered to be very large. 
What are your thoughts around the memory footprint and the overall startup cost and things like that, associated with WildFly Swarm?

Well, given the amount of investment in application servers and in the JVM runtime, it's getting smaller all the time; every single release, it's getting faster, more performant, and requires a smaller footprint. Let me see if I can get an idea here of how much memory I'm consuming.

It's actually hard to measure the actual memory consumption within a Linux architecture, just because of the shared pages and things like that.

Yeah. Well, I have not enabled the metrics on my cluster, but usually I've been able to run a good load with 500 megs.

And I was testing literally right before we started this live event: I was testing Vert.x at about 268 megabytes consistently, making sure that everything was working well. You will see it go down as low as that, some people would say lower, though I've not found that to be greatly successful, and up to half a gig, depending on the workload. Vert.x is the smallest, Spring Boot the next smallest, and WildFly Swarm the largest of the three. I've not tested Dropwizard, and I've not tested some of the other JVM-based runtimes, because those are the three we focus on here at Red Hat.

But we are definitely out of time today. I thank you all so much for your participation and your chat. Do check out the previous recording, aside from that small five-minute window we lost within the recording last time. And hopefully you got good information from Rafael. Rafael, thank you so much, fantastic presentation; I love how fast you blew through all of that. We will offer deeper dives and more content like this in the future. Stay tuned to DevNation Live on our webpage, because you're gonna see a lot of other really awesome stuff scheduled and upcoming, including a really awesome session on Istio, which, if you're interested in this kind of modern microservices architecture, is a game changer; it's gonna fundamentally help you rethink the way you build applications. It's gonna be totally awesome. So if you need anything, reach out to us on Twitter, find us via email; we're available across the internet in numerous ways. But thank you so much. Thank you, Rafael.

Thank you, and thank you everyone who watched this live, and those who will watch the recorded session. Thank you.