So now it's time to invite our next guest. Pulling Ray in. Ray, if you're hearing this, I hope you're ready, because you're about to enter the stage. We only have friends here on the DevNation track. We had Adam Bien, we had Simon Maple, and you had the opportunity to learn his amazing personality. And of course we have another friend coming up to the stage: we have Ray Tsang from Google. We bump into each other in a lot of different airports and conferences, and I'm very happy to see you here at DevNation Day, Ray. Yeah, glad to be here, and thanks for having me. It's always our pleasure. I'm sure Ray will be able to share a lot of great content, because he has some unique capabilities: he's a Java champion, a Java expert of course, a container expert, a Kubernetes expert, and he has a lot to share with us. Ray, without any further delay, the stage is yours. OK, let me go ahead and share my screen, and let's go. Wow, live in 30 minutes is going to be hard. Hey, Simon, what's up? So yeah, thanks for being here. My name is Ray, I'm a developer advocate for Google Cloud, and I love to bring some of our best technologies, both open source and from Google Cloud, to developers all over the world, especially Java developers. If you have any feedback or comments on how we can help you run your Java applications better, please let me know. You can find me on Twitter at @saturnism, and if there are other things you want to learn more about, you can go to my site, saturnism.me. So today, because we only have about 30 minutes, I'm going to go fairly quickly over some of the key points on how you can get started with Kubernetes with your existing Java application, and also apply some of the best practices. The first thing you need, of course, is a Kubernetes instance. You can, of course, use a remote Kubernetes cluster.
Especially if you're working in a larger organization, you may already have Kubernetes clusters that are ready to be used. Or, if you're using cloud, you can create a new Kubernetes cluster on Google Cloud, for example, very easily, with one click. However, as developers, we often want to do this locally on our own machine, without a dependency on a remote environment. In that case, you probably want to install a local Kubernetes environment, and there are many different choices. If you're running on a Linux machine, your choices are much better, because Docker and Kubernetes are all based on the Linux operating system. So if you're running Linux, you can install them directly onto your operating system, without having to run yet another virtual machine. One of the best tools I've seen so far for a Kubernetes environment on Linux is k3s. It's really easy to use. I've personally used it in my other environments, which are Linux-based, and you can get a Kubernetes environment started in less than a minute. I actually tried this out, which is really nice. If you're running Docker already, you can also use something called kind. Kind stands for Kubernetes in Docker, so you can start up a Kubernetes environment directly in your existing Docker environment. That's also pretty useful, especially if you're running some kind of testing where you need an ephemeral Kubernetes environment, so you can test against real Kubernetes APIs without a real cluster. You can run kind inside your Docker daemon and do all the tests you need against real Kubernetes APIs. On a Mac or Windows machine, your choices are more limited. You either have to use Docker Desktop or Minikube, and both of them will be running Kubernetes inside a VM. For this talk, I'm actually using Minikube.
So you can install Minikube with a command-line installer you can get from the website. Or, if you're using the gcloud command line, which is something we have at Google that you can install on your machine too, you can just say gcloud components install minikube, and this will install Minikube onto your machine. Once you have it, you can simply say minikube start, and it will start a new VM for you, and everything should be up and running. Then I should be able to say kubectl get pods, and now I'm actually connected to my Minikube environment running locally on my machine. I can double-check here: I can say kubectl cluster-info, and this will tell you, so you can double-check, that it's connected to my internal IP. The local IP address is here, and that is definitely the Minikube IP address. So the first thing is to get Kubernetes up and running on your local desktop. Then, if you have an existing application, you can containerize your app. Another friend, Matt Raible, who works at a company called Okta, always says: friends don't let friends write authentication. Because authentication has to be added to every single app; it's repetitive, it's mundane, and it's also error-prone. Now, most of the material you find online when you're learning how to create containers usually tells you to write a Dockerfile. And one thing I like to say is that friends also don't let friends write Dockerfiles. Why? Because Dockerfiles are also repetitive, mundane, and potentially error-prone too. It actually takes a lot of experience and practice to write a Dockerfile that's optimized and suitable for your specific application. So if you don't write a Dockerfile, what can you do instead? Well, there are actually many other tools out there today that can help you containerize without having any Dockerfile at all.
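The local setup just described boils down to a handful of commands. A sketch, assuming the gcloud SDK is installed (the standalone Minikube installer from the website works the same way from minikube start onward); these need a machine that can run a VM, so they're shown here as a command transcript rather than something to run blindly:

```shell
# Install Minikube via the gcloud SDK (or grab the installer from the Minikube site)
gcloud components install minikube

# Start a local single-node Kubernetes cluster in a VM
minikube start

# Verify kubectl is now pointed at the local cluster
kubectl get pods

# Double-check the API server address: it should be the Minikube VM's local IP
kubectl cluster-info
```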
So I'm going to switch over to my little documentation site, where I wrote down everything I know about running applications in Kubernetes. In this case, for a Java application, you can containerize with different tools. One of my favorite tools is called Jib. It allows you to add a plugin to your existing Java build, and you can just run the build and use Jib to produce a well-optimized container image. Another thing you can explore, if you've ever used something like Heroku or Cloud Foundry: they have a concept called buildpacks, and buildpacks have now evolved into Cloud Native Buildpacks. So you can also use buildpacks to consistently create your container images with best practices, without writing a Dockerfile. There are many different buildpacks you can use. For example, there's the Paketo buildpack, which will build your application one way. But if you're deploying onto Google Cloud, into our environments, you can use the GCP buildpack, which is the same buildpack we use for our PaaS environments, and you can use that buildpack to build your container images instead. For my demo here, I'm just going to show you how to containerize with Jib. So here I'm going to add Jib. And that's it: just add the Jib plugin, give it a container image name, and you're done. Now you can go back to the console and run a Maven compile, maybe skipping the tests for now, and then jib:build. What this will do is automatically analyze your build file, produce the container layers in the best way possible with best practices, and push everything directly to the registry. And because of how the image is layered, it pushes the minimum number of layers possible; that's why the build is really fast. If you want to use a buildpack, you don't have to write any Dockerfile either. You can just say pack build.
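As a sketch, the Jib setup for a Maven project is just a plugin entry in pom.xml. The image name and plugin version here are placeholders, not the ones from the demo; point the image at your own registry:

```xml
<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <version>2.5.2</version>
  <configuration>
    <to>
      <!-- hypothetical image name; replace with your own registry path -->
      <image>gcr.io/my-project/hello-world</image>
    </to>
  </configuration>
</plugin>
```

With that in place, something like `mvn compile jib:build -DskipTests` builds and pushes the image with no Dockerfile involved, and `pack build` is the equivalent entry point for the buildpacks route.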
And then you get a similar result. Now, once you have the containers, you need to deploy them into Kubernetes. So, number three: YAML files. This is where people start to talk about YAML and how complicated it is, and it is complicated, and in Kubernetes you do have to write it. It is the best practice to document and write down, declaratively, how your application is to be deployed, so that you can consistently reproduce the same environment over and over again. You have to store that information somewhere, in some format, and a YAML file is the choice today. However, what I'm also going to say is: friends don't let friends write YAML files. So if you don't have to write the YAML, don't write it. In fact, to get everything bootstrapped, you probably don't have to write the YAML file yourself, and there are really simple ways to do it. Somebody said they cannot see the bottom of the screen, so I'm going to move to the top. To bootstrap a YAML file from nothing, there are two ways to do it. One way is to use the kubectl command line. So I can go ahead and create a new directory called k8s. Usually what happens is that you grab the name of the container image, and then you can say kubectl create deployment. I'm going to call it hello-world and give it the name of the image; this is the image I produced with Jib, for example. Then, rather than hitting Enter right now, which would just deploy this application into Kubernetes, I'm going to add --dry-run. This will produce a YAML file with just the basic configuration, which I can adjust later. So I'm going to output this into my k8s directory, as k8s/deployment.yaml. And similarly, we can do the same thing for a service as well. And now I have bootstrapped two different YAML files that we can readily deploy.
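The bootstrap step might look roughly like this; the image name is a placeholder, and the service is generated here with `kubectl create service` as one client-side way to do it (the talk doesn't show the exact service command, so treat that half as an assumption):

```shell
mkdir k8s

# Generate a Deployment manifest without deploying anything
kubectl create deployment hello-world \
  --image=gcr.io/my-project/hello-world \
  --dry-run=client -o yaml > k8s/deployment.yaml

# Same idea for a Service fronting port 8080
kubectl create service loadbalancer hello-world \
  --tcp=8080:8080 \
  --dry-run=client -o yaml > k8s/service.yaml
```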
To deploy this application, I also need to make sure my container image is built into the local Docker daemon used by Minikube. Because what I did earlier with the jib:build command actually built the container image into a remote registry; it went into my Google Container Registry, for example. That also means that every single time I run this application, the image has to be pulled down from the registry, and there's no need for that if you're running Minikube with the same Docker daemon. So instead, what I want to do is, first of all, make sure the Minikube Docker environment is the one I'm using. I need to use minikube docker-env, for example, to configure my environment to use the same Docker daemon. Then, rather than jib:build, I can use jib:dockerBuild, and what this will do is build my container image directly into the Docker daemon instead of pushing it to a registry. OK, so that's one simple way of doing it. Now I can go ahead and kubectl apply -f my k8s directory, and that will deploy my application, and that's really it. If I want to get rid of it, I can just do a kubectl delete -f and remove my application from my Kubernetes environment. Now, there's a different way you can create the YAML file, instead of using the kubectl command line to bootstrap it. There's another really neat utility project called Dekorate. Dekorate is an annotation processor that can analyze your application and automatically infer some default values for the YAML generation. For example, in this case I'm using Spring Boot Actuator, so the app comes with a health check automatically. Using Dekorate, if I rebuild this application right now, let me do a clean and package, skipping the tests, and do a package.
And this will actually use the annotation processor to create the YAML file for me in the target directory. Let me go find the YAML file. And, oh, where did it go? Let me do a clean, skip the tests, and package again. Well, if the annotation processor ran, it should have put everything into META-INF. Yeah, there it is. And here's the Kubernetes YAML, and you can copy this file out, and it has everything bootstrapped for you. Now, the important part here is that you do actually want to have the liveness probe and the readiness probe. Again, a lot of the tutorials out there will say: hey, just deploy your app without any of this configuration. That's great to get started, but if you want to deploy into a near-production environment, as you progress through the different stages, you will definitely want to add these things. So what does the liveness probe do? A liveness probe is a way for Kubernetes to check whether your application is alive; if this endpoint does not respond, Kubernetes will try to restart your application. So you should usually always have a liveness probe configured in your YAML file. I'm going to copy this configuration out into mine, just because it's really useful. You can use the Dekorate-generated file as is, but I like to copy things out into my own file so I know exactly what I have configured. The other configuration that's also very important is the readiness probe. This is how Kubernetes can tell whether your application is ready to serve. Why? Because sometimes your application is alive, but it is not ready to serve: maybe you're pre-warming a cache, loading data, or some connections haven't been fully established yet. So you want Kubernetes to wait before it starts sending you traffic.
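A minimal sketch of the two probes as they might appear in the deployment's container spec; the paths are an assumption based on Spring Boot Actuator (older setups may expose just /actuator/health), and the timing values are illustrative:

```yaml
# Inside spec.template.spec.containers[...] of the Deployment
livenessProbe:            # restart the container if this stops responding
  httpGet:
    path: /actuator/health/liveness
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 10
readinessProbe:           # only route traffic to the pod once this succeeds
  httpGet:
    path: /actuator/health/readiness
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 5
```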
And Kubernetes will check against this particular endpoint to see whether your application is ready to serve. I'll copy this part out and put it here as well. I'm zooming in a little so you can see all the changes I've made. Then, finally, another really important bit before you consider running the application at a higher level: you also want to make sure you have configured the amount of CPU and memory your application can use. For example, if you don't configure anything, your JVM process is going to think it can use everything on that machine. But because Kubernetes is going to pack a number of different JVM applications onto the same nodes, onto the same machines, if they all think they can use everything, they're going to compete for resources. So setting some kind of resource constraint is really important in this context. In the YAML file you can configure the resources block, and there are two different blocks you can configure. One is requests, and this is basically how much you need at the minimum; at the minimum, this is what a machine should have available to run your app. For the minimum CPU, for example, here I'm going to use half a CPU. To write one half, you can say 500m, that is, 500 millis; the small m means one-thousandth, so 500 divided by 1000 is 50%, and that's one half. Similarly, you can set the memory here; for example, I can set 256 megabytes. Now notice, very importantly, the big M versus the small m. The small m means one-thousandth, and the big M means megabytes. If you accidentally put a small m here, you're going to get less than one byte, because that's 0.256 bytes, and that's not going to work. So make sure you put the big M. Similarly, that was the minimum, and you should always set the maximum as well, and that will be the limit. So here's the limits block.
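Putting the request and limit settings together, the resources block might look like this, using the numbers from the talk; note the capital M (megabytes) versus the lowercase m (one-thousandth of a unit):

```yaml
# Inside the container spec of the Deployment
resources:
  requests:
    cpu: 500m      # 500 millicores: half a CPU at minimum
    memory: 256M   # capital M is megabytes; 256m would be 0.256 bytes!
  limits:
    cpu: 2         # at most two CPUs
    memory: 500M   # hard cap; exceeding it gets the container OOMKilled
```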
Maybe at max I'm going to allow the use of two CPUs, and I do need to set the limit for the maximum amount of memory; otherwise it might see the whole memory on the machine, and you're going to run out of memory really quickly. For this case, I'm going to say maybe 500 megabytes is the top. Before I move on, another really important part of this: if you are using JDK versions of JDK 10 and above, the JVM will actually be able to see this memory limit, and it will automatically configure a heap that's smaller than the limit, based on a ratio. So if you're using JDK 11, or JDK 13, or 15, which just came out, there are no issues there, that's fine. If you're using JDK 8, then you have to be careful, because only 8u192 and above is able to see this particular configuration. If you use something lower than that version... For example, let me go to my max-heap example down here. What I'm going to do is run a Docker instance with 256 megabytes of RAM, using an older version of the JDK. If I run this and print the max heap, what you're going to see is that the max heap is way bigger than 256 megabytes, right? If I use Perl as a calculator (I don't know if anyone uses Perl anymore, but I use it as a calculator every day), I can convert: that's bytes, megabytes, and, sorry, that's gigabytes. So this tells me the JVM thinks it can use this much memory as heap, and that far exceeds my 256 megabytes. The JVM is going to try to use all of that heap, so you're going to exceed the memory limit, and this is where you get to see OOMKilled errors from your Docker instances, right? So just make sure you're using the right version of the JDK; otherwise you're going to run out of memory and get OOMKilled very often, OK? So now we have the container image, and we have the YAML file. One last thing I need to remember to do is to make sure my image pull policy is IfNotPresent.
So what does this do? Well, by default, the container image will be pulled down from the registry every time. But this image does not exist in my remote registry, because I pushed it into the local Docker daemon that Minikube is running in, so it's actually not there. When you try to deploy the file as is, you're going to get an error saying it cannot find the Docker image, even though it's already present on that machine, in the same Docker environment Minikube is running in. So you just want to set the pull policy to IfNotPresent, and this will make sure it doesn't try to re-pull something that does not exist remotely, and just reuses what's already there, OK? And with all of this configuration in, hopefully we can go ahead and deploy. So I'm going to do a clear, apply everything, redeploy my app and get my pods, and I can see that this is almost up and running, and hopefully this will actually run. Oh, live demos. And then once this is up and running, we can also take a look at my service. So this is a load balancer that we created, and it's not so easy to connect to, because it's using a different IP range that my machine cannot actually reach, OK? So, for example, for me to connect to this particular application, what I need to do is set up a port-forward. I can use the kubectl command line and create a port-forward into my service, my load balancer that's running in my cluster, to the hello-world service, and I'm going to bind to port 8080. Let me just make sure there's nothing else listening on the same port. There we go. And try that again, yep. So now I've bound my localhost port 8080 to the load balancer in Kubernetes, which will then route my request to the actual backend. If everything works properly, I should be able to hit localhost:8080, and yep, I have a message here, which is great, OK? So now I can see the entire thing is connected up.
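The pull-policy tweak is one line in the container spec of the deployment; the image name here is a placeholder for whatever only exists in Minikube's Docker daemon:

```yaml
# Inside spec.template.spec.containers[...] of the Deployment
containers:
  - name: hello-world
    image: hello-world              # image that exists only in the local Docker daemon
    imagePullPolicy: IfNotPresent   # use the local copy; don't try to pull from a registry
```

After a `kubectl apply -f k8s/`, something like `kubectl port-forward service/hello-world 8080:8080` then binds the in-cluster service to localhost:8080 for testing.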
All of my app is running in Kubernetes right now, and I can send a request through it. Now, if you want to make a change, things get complicated, because you have to go rebuild your container image, redeploy the YAML file, re-trigger the restart of the application, and so on and so forth. And again, this process can be mundane, repetitive, and error-prone. So to make this a little bit easier (let me just remove that for now), we have a plugin called Cloud Code. The Cloud Code plugin works in IntelliJ and also VS Code, and what it lets you do is add Kubernetes support. So if I zoom in a little, you can see that with the Cloud Code plugin we can add Kubernetes support, OK? What this will do is go into your project and find these YAML files, and then you can pick the container image you want to add to Cloud Code so that you can do iterative deployment. Then you can pick and choose how you want to build this application. In my case, because I'm using Jib, I'm going to use Jib as the builder when I initialize. What this will do is create what's called a skaffold.yaml file. This is the configuration that, behind the scenes, lets us use a tool called Skaffold to do all the things we do during development, but automatically, OK? This is what we call a development loop. What does that mean? It means that I can now start this application from the IDE, and it will use Skaffold behind the scenes, driven by this YAML file: which application am I going to build, which container image do I need to create, and which Kubernetes manifests do I need to deploy. And as soon as I click run, what happens behind the scenes is that it first builds my application, and then uses Jib to build my container image.
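A skaffold.yaml like the one Cloud Code generates might look roughly like this; the image name and manifest path are placeholders, and in practice the file is generated for you when you add Kubernetes support, so this is only a sketch of its shape:

```yaml
apiVersion: skaffold/v2beta5
kind: Config
build:
  artifacts:
    - image: hello-world
      jib: {}              # build this image with the project's Jib plugin
deploy:
  kubectl:
    manifests:
      - k8s/*.yaml         # which Kubernetes manifests to (re)apply on every change
```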
And once the container image is built, it then deploys my YAML file into my Kubernetes cluster, in this case Minikube. It's going to wait for everything to start, and when everything is up and running, it will set up the port-forward for me, and everything is ready to go. Again, it's pretty simple and straightforward to use. And there you go. So here's the application running, and it also set up the port-forward for me so that I do not have to do this manually, OK? So if I want to test my application again, I can say curl localhost:8080, right? Nothing much different, right? "Thanks for being here." However, if I change my application now, Cloud Code, using Skaffold behind the scenes, will actually automatically detect the change I'm going to make. So for example, I'm going to say "hello, DevNation" and save. It will automatically detect this change, and then it will automatically rebuild and restart my application. There you go. So here's the rebuild: it rebuilds the container image for me, then restarts the application, waits for it to start, and sets up the port-forward so I can test the application again, OK? Now, this still takes some time to run, a few seconds, right? Here we can go back to port 8080 and take a look. Here's the curl, and we can see the application has been updated. But rather than doing this, sometimes you might just want to debug your application. But if your application is running in the Kubernetes cluster, how do you actually debug it? What that entails is that you have to configure the YAML file to open up the debugger port and then attach the IDE to the debugger port. But we also made this a little bit easier. So using Cloud Code, for example, rather than hitting the run button, you can click on the debug button.
And what this will do is exactly the same thing I just showed you, but it also automatically configures the debugger agent, exposes the debugger port, and configures my IDE to connect to the debugger. So once this is done and up and running, I can now, for example, debug my app. So I can add a breakpoint here. Let me go back and trigger my application again. And because we triggered the breakpoint, we're now seeing the state of the app, and I can, of course, step through the app if I need to. And I can continue and resume, and then my request goes through. And because it's using a debugger, it can also do hot swap. So for example, my debugger is still connected, in which case I can say "hello, DevNation, thanks for being here". And if I save this file and just recompile this class, it can replace the class through the debugger for me. There we go: it hot-reloaded the classes, for example. If I go back and trigger the same app again, the same endpoint, you can now see the code being updated through the hot swap that's just built into the JVM ecosystem. OK, so hopefully that was all really cool. The last thing you may need to do before you go into production is to potentially change the number of instances you want to deploy, or maybe change an environment variable, or change the configuration. Now, what some people might do is copy and paste the same files over and over again into different directories, and so the files become duplicates; it's very easy to duplicate the same files over and over again. But the more duplication you have, the more you have to maintain. So, rather than copying and pasting the files over and over again, we have a final tool here that I want to show, which is called Kustomize. With Kustomize, what you can do is keep a base set of files that you don't want to copy over and over again.
So we're going to keep the files once, OK? I'm going to keep the files once as a base set, and create a new file called the kustomization file, kustomization.yaml. And the kind of this, oh, kind, one second, the kind is called Kustomization. I'm going to zoom in here a little bit. And what is in this kustomization? Basically, we list the two files, deployment and service. To customize something here, if I want to create a new configuration just for a QA environment, for example, or for a production environment, all I need to do is create a new directory. And in this directory, rather than copying everything, I just create another kustomization file. But then, rather than referencing each individual file over and over again, I can just reference the base directory. So this basically inherits all the same files from the base directory, and you don't have to do the copy yourself, OK? And then you can apply some specific customizations. So for example, in this case, maybe I want to change the number of instances from one to two. I'm going to zoom in here a little bit. So rather than copying the whole file, you can just create a new one, called scale.yaml, for example. And in this YAML file, I just key it off of the apiVersion, the kind, and the name of the resource. And rather than copying and pasting every single entry of the YAML, I can just say: you know what, I want to change the number of replicas, OK? So I change it from one to two. And finally, I say: merge them together. So I inherit from the base set of YAML files, and then merge in the thing I just modified. And now you can see that rather than duplicating the configuration over and over again, I'm just adding little patches on top of my base configuration. And finally, if I go to my Kustomize directory and go to QA, I can say kustomize build.
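The files just described might be laid out like this, as a sketch; the base/qa directory names and the hello-world resource name are assumptions about how the demo is arranged:

```yaml
# base/kustomization.yaml: the shared set of manifests
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
---
# qa/kustomization.yaml: inherit the base, then patch it
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../base
patchesStrategicMerge:
  - scale.yaml
---
# qa/scale.yaml: only the identifying keys plus the fields that change
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 2
```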
And what this will do is merge all the files together. And if I did it right, and if I'm lucky and everything works, we can clearly see that my deployment is now using two replicas rather than just one. If I go back to the base environment and do the build, we can see that the merged set is really just the original set, with replicas equal to one, OK? So let me go back to the slides, to the very end here, because of the time. Oh, I'm flipping between the two. So just remember: first, create a Kubernetes environment; containerize your app with some kind of tool like Jib or a buildpack; create or generate your YAML files; use Skaffold to increase the velocity of your development loop; and finally, use Kustomize to create configurations for different environments. So with that, thank you so much for being here. If you have any questions, feel free to find me on Twitter, or you can go to my site for more information. All right? So I think my time is up. And thank you very much. All right, Ray, I don't know if you have time for one question, but... I do, if you have the time. Yep. First, I can see here Harry is asking: is it possible to add a new public method in hot swap using the debugger? Not quite. So you cannot add any new signatures, you cannot remove signatures, you cannot change signatures in this case. So typically I would say: always do as much as you can locally, test everything, and make sure you're still writing unit tests and integration tests. And if you really want to see everything running together, that's when you deploy into Kubernetes. Yeah, so hopefully by that time you'll have minimal things to change. And the last one: is Kustomize part of the IDE plugin? Does it merge like Git, by finding neighboring lines? It is not. Kustomize is really a utility, so you do have to install Kustomize, for example.
However, in our Cloud Code IDE plugin, we do give you some code completion in this YAML file, so you know very easily what type of customization you're applying. But when you do want to run and check the output of Kustomize, as far as I know, you have to go back to the command line and do a kustomize build, right? So you can do something like that, and then you can check everything. That being said, you can also apply everything through Kustomize by using kubectl apply -k. This is another way to apply all of your YAML files, but with the kustomization YAML, so it applies all the patches you defined through the kustomization YAML. Like that. Yep. Awesome. Oh, thank you very much, Ray. I'm pretty sure everybody learned a lot about Java on Kubernetes and the best practices. We really loved having you here, and I hope to see you soon at the next virtual conference. Yeah, thank you so much for having me here. If anyone has questions, I'll be on Slack as well. Thanks. Perfect. See you soon.