Good morning. My name is Arun Gupta, and I work for Amazon in the open source team. How many of you are Java developers? Almost 90% of you, so I think this talk is very relevant. I've been a long-time Java developer; I started my Java journey back in 1999, when we were doing JDK 1.x at Sun Microsystems. Now, as a Java developer, whenever a new technology comes along, I'm always thinking: how do I get started with that technology? How do I learn it while applying my existing skills? That's why I created this GitHub repo. There are no slides, so this is purely going to be a code-driven workshop. In the next 35 minutes or so, I hope to show you what your journey could look like if you are a Java developer and you want to get started with, say, Kubernetes. Now, as I said, all of the code is available on this GitHub repo, so this is the one you want to bookmark. The way I look at it is: file issues, send me pull requests, star it, fork it, watch it, whatever way you want, but that's where all the content is sitting. As an application developer, I'm going to start by showing you quickly what our application looks like. It's a simple Spring Boot application, just a simple hello world. That's the intent, because you already know how to build much fancier applications. Then I'm going to show you how to build and test it using Maven. Then we start asking: what is my first step in the journey to Kubernetes? I need to make a Docker image out of that Java application. What are my choices for that? We'll talk about that briefly. Then we look at it particularly from the Java perspective: what are the things I need to be aware of? Can I use a JDK as a base image? Should I use a JRE as a base image?
With JDK 11, what are the things I need to be concerned about? Then we talk about how, once I've built the Docker image, I take that Docker image and run it on a Kubernetes cluster. What are my choices for running Kubernetes locally on my machine and in the cloud? Those are the different options we can consider. Then we're going to migrate the application from development mode, which is on your laptop, to production mode, which is in the cloud. We'll talk a little bit about the relevance of a service mesh. What does it give you? What does it buy you? Why do you need it? And what are your choices around it? We'll talk a little bit about the deployment pipeline, because that's a relevant aspect: as soon as you do a git push to your application, you want the application to be available in your development or deployment environment. So we'll talk about all of those different options. Let's go look at our application first. If you look at app, this is the directory where my application is sitting, and as I said, it's a simple Spring Boot application. If you look at the source, at my greeting controller, it's a very simple Spring Boot application. All it returns, on line 14, is hello. So essentially, when you run the application and hit it on /hello, it returns that hello response to you. If you look at my pom.xml, which is how I build my app, nothing fancy: I'm using the Spring Boot starter as my parent, I have the Spring Boot Maven plugin in there, and we'll talk about this profile a bit later. A very standard Spring Boot application, no special dependencies required. Now, if I were to run this application, it would return hello. So if I go down here, I say mvn spring-boot:run, then I say curl, and it returns hello.
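As a sketch, the controller just described would look something like this; the package and class names are assumptions, and only the /hello mapping and the "hello" return value come from the talk:

```java
// Hypothetical reconstruction of the greeting controller described in the talk.
package com.example.app;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class GreetingController {

    // GET /hello returns the plain "hello" response shown with curl
    @GetMapping("/hello")
    public String hello() {
        return "hello";
    }
}
```

With `mvn spring-boot:run` active, `curl localhost:8080/hello` then returns that string.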
We have only 35 minutes, so you can only do so much live demo; I'm just going to walk you through it. Now the second step for you is this: your application is up and running, and you want to get the exact same output from a Docker image, because the first step in your journey to Kubernetes is to create a Docker image out of your Java application. So what are your choices for creating a Docker image? You can use a Dockerfile; that is one of the ways to create a Docker image. You can use a Maven plugin; that is another option. And there is a lot of variety in the Maven plugins available, so we'll talk a little bit about that. Now, as you are writing a Dockerfile, what should your base image be? Can I use a JDK as a base image, or should I use a JRE? With JDK 11, or rather JDK 9 onwards, you can create custom JREs, so how does that work out? Because eventually, when your application is deployed in Kubernetes and you're using a microservices-based architecture, your Docker image is downloaded on the fly onto a particular node, it's very important that your image size is small. If you use a JDK base image, you are unnecessarily bundling the compiler and everything else into the image itself. That's where you start thinking: let me use a JRE as a base image. So what I'm going to do here is go back to my application and show you a Dockerfile. Let's walk through it a little bit. First of all, on line 1, I'm using Maven as a base image. This is important because now I can take this Dockerfile anywhere Docker is available, and I don't need to install Maven separately; I'm using Maven as the base image itself. That means it will pull the Maven Docker image, which has Maven pre-installed. Once I do that, I can copy my source code, and now I can run Maven itself.
One of the steps that is commented out here is, for example, that I can take my local Maven repository and copy it into the image itself. This saves time: when the image is being built, it will not download the dependencies into the image every time. That's one of the patterns you can use to make your image build a bit more efficient. Now, this is the first FROM statement. In a Dockerfile you can have multiple FROM statements, and that is called a multi-stage Dockerfile. In this case, my first FROM statement is Maven, and I'm calling it the build stage. The second FROM statement uses openjdk:8-jre-slim as a base image, and here what I'm saying is: from the build stage, copy the jar file that was built into the second stage. This is a very important step, because you don't want all the classes and all the jar files that Maven downloaded to be included in your eventual image, which is the final image. So you copy only from the target directory, because that's exactly where everything is going to live. You copy that, then you set up your Java options, which would be very useful if you were to debug the application using the IDE of your choice, and then you fire up your application. So basically, you took the exact same Java application and, using a multi-stage Dockerfile, you downloaded Maven, used jre-slim as the base image, and created a new Docker image. Now, two tools were introduced in JDK 9; they're called jlink and jdeps. Because JDK 9 fundamentally changed how the JDK itself is built, what jdeps does is take a look at your application and give you the list of modules that your application depends upon. So if you go down here: what I've done is create my war file and rename it to a jar file. Now, on my jar file, I'm saying: run jdeps.
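A minimal sketch of the multi-stage Dockerfile being walked through; the Maven image tag and paths are assumptions, while openjdk:8-jre-slim is named in the talk:

```dockerfile
# Stage 1: build with Maven, so the host only needs Docker, not Maven
FROM maven:3.6-jdk-8 AS build
WORKDIR /usr/src/app
COPY pom.xml .
COPY src ./src
RUN mvn -q package

# Stage 2: copy only the built artifact onto a slim JRE base image
FROM openjdk:8-jre-slim
COPY --from=build /usr/src/app/target/app.jar /app.jar
# JAVA_OPTIONS left empty; put remote-debug flags here when needed
ENV JAVA_OPTIONS=""
CMD ["sh", "-c", "java $JAVA_OPTIONS -jar /app.jar"]
```

Only the second stage ends up in the final image, so the Maven binaries and the downloaded dependency cache never ship.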
Take target/app.jar and print all the modules it depends upon, because I don't want to include everything in the JDK as part of my JRE. What I'm doing here is essentially creating a custom JRE. jdeps gives me the ability to print the modules my application needs, and these are the modules required by Spring Boot as well. So now I have the list of all the modules. Then I'm using the tool jlink, saying: add all these modules and create my custom JRE. Up until JDK 8, a standard JRE was shipped with Java, but starting with JDK 9 you can build your own custom JRE. So instead of shipping the entire JDK or JRE, you're now building a custom JRE which is very specific to your application, and then using that in the base image. So if I go back to my application and look at Dockerfile.jre, all I'm doing is using debian:9-slim as a base image, copying app.war into the root, copying the custom JRE that was generated for me, and that's it. The custom JRE is where my JVM lives, and I'm using that JVM to spin up my application. So it's a much smaller Docker image, and as we discussed, that is super important, particularly because in a microservices environment you want the Docker image to be small so that when it needs to be downloaded onto a Kubernetes host, it downloads quickly. Another tool that is very important from a Java developer perspective, and that I want to highlight here if you look at the pom.xml, is called the jib plugin.
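The jdeps and jlink steps just described might be sketched like this; the jar path and the exact module list are assumptions, and `--print-module-deps` requires a recent JDK:

```shell
# Ask jdeps which platform modules the application actually uses
jdeps --print-module-deps target/app.jar

# Feed that module list to jlink to assemble an app-specific runtime
jlink --add-modules java.base,java.logging,java.management \
      --strip-debug --no-header-files --no-man-pages \
      --output target/custom-jre

# The resulting runtime can launch the app without a full JDK
target/custom-jre/bin/java -jar target/app.jar
```

The directory produced by jlink is what gets copied into the debian:9-slim image in Dockerfile.jre.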
Now, in my Maven pom.xml I have a jib profile, and I'm using the jib Maven plugin, version 1.1.2. This plugin fits seamlessly into your entire pom.xml flow: you define a profile, and when you say mvn package and specify that profile, it will actually generate a Docker image for you. This particular tool is especially useful because, think about your Spring application: how many times do your dependencies change versus your application code? So what jib does is make the dependencies one layer in the Docker image and the application another layer. If you take a step back, a Docker image is created as a stack of layers, and if only a single layer has changed, then when your image needs to be downloaded, only that single layer is downloaded. So by splitting a Spring Boot application into dependencies, application, and other layers, jib makes it possible to download only the specific layer that changed. Your dependencies, which may not have changed, are not downloaded again; only the application layer that changed is. What I'm doing in this Maven plugin is saying: use jre-slim as the base image, and everything I specified in my Dockerfile is now specified here. So it's the same image size, but a little more optimized for your Java development environment, particularly for a Spring developer. So now you have a basic Docker image ready. Next, you need to describe that Docker image so that it can be understood by Kubernetes, and that's where you start learning the Kubernetes terminology. The basic unit in Kubernetes is a pod. So you deploy a pod. Now, a pod is not something you typically deploy by itself; a pod is typically created by a deployment.
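The jib profile described above might look roughly like this in pom.xml; the plugin coordinates and version come from the talk, while the target image name is an assumption:

```xml
<profile>
  <id>jib</id>
  <build>
    <plugins>
      <plugin>
        <groupId>com.google.cloud.tools</groupId>
        <artifactId>jib-maven-plugin</artifactId>
        <version>1.1.2</version>
        <configuration>
          <from>
            <!-- base image, as described in the talk -->
            <image>openjdk:8-jre-slim</image>
          </from>
          <to>
            <!-- hypothetical target image name -->
            <image>greeting</image>
          </to>
        </configuration>
      </plugin>
    </plugins>
  </build>
</profile>
```

Running `mvn package -Pjib jib:dockerBuild` would then build the layered image against the local Docker daemon, no Dockerfile required.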
A deployment is a declarative way of saying what the pod needs to have and how many replicas it needs. So what you say is: take this deployment; the deployment will create the pod; the pod will have the container; and here is how many replicas of the pod you want. All of those are resource descriptions in Kubernetes that you need to create. So you need to understand what the schema looks like and then create your resource descriptions accordingly. Now, just creating a pod and a deployment may not be sufficient, because the pods are given an IP address, but that IP address is not accessible outside the cluster. So you also create a service description, and that service, depending upon how you write the description, is accessible inside or outside the cluster. So the three basic concepts you need to understand are a pod, a deployment, and a service; those are the three resources you need to create. Let me show you how that looks. If I go to the manifests directory, I have app.yaml, which is where all my app descriptions are. This is roughly what my Kubernetes manifest looks like. On line 1, I'm specifying the API version; on line 2, I'm saying this is of kind Deployment. I specify some metadata. Then in the spec, which is the deployment spec, I say one replica, and this is one replica of the pod. Then I specify certain match labels; that means this deployment will match the pods that have these labels. Keep going down: this is where I define my template for the deployment, and this is my pod spec, essentially. So this is what goes into the pod spec. Now I'm saying this pod has one container, named greeting, and it has the image that we created earlier.
This pod also has the label app: greeting. And the good thing is, what are labels? A label is basically a key-value pair; you can pick the key and the value however you want. In this case I happened to choose app: greeting. So the pod has this one label, app: greeting, and in the deployment I'm saying: match any pod that has this label. The pod also exposes a container port, 8080, but that is only available within the pod itself. So my deployment has the pod spec; those are the two resources I wanted to create earlier. And now I also have a service. In the service, I'm giving the service a selector, and what the service is saying is: pick any pod that has this label on it, app: greeting, which is again the same label that is on the pod created by the deployment. So think about it this way from your perspective: you had your Java application; you converted the Java application into a Docker image; the Docker image is what you specified in your deployment spec; and then you create a service on top of that. The way the service and the deployment are correlated is through labels, so it's a very loosely coupled architecture in that sense. Now you have a deployment and you have a service. A service is given a stable IP address that is accessible to you. Pods are given IP addresses too, but if pods terminate and come up on a different host in your Kubernetes cluster, they may be given a different IP address; that's not something you can rely upon. But a service is given a stable IP address that is accessible and fixed for the duration of the service. In this case, I'm also specifying the type as LoadBalancer, and the advantage of that depends upon where the service is deployed, whether on your local cluster or in a cloud cluster.
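Put together, the deployment and service being described would look something like this; the resource names and image name are assumptions, while the label wiring, port 8080, and the LoadBalancer type come from the talk:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: greeting
spec:
  replicas: 1
  selector:
    matchLabels:
      app: greeting        # the deployment matches pods with this label
  template:
    metadata:
      labels:
        app: greeting      # the pod carries the same label
    spec:
      containers:
      - name: greeting
        image: greeting    # hypothetical image name
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: greeting
spec:
  type: LoadBalancer       # provisions an external load balancer in the cloud
  selector:
    app: greeting          # the service selects pods by the same label
  ports:
  - port: 80
    targetPort: 8080
```

The only coupling between the service and the deployment is the shared app: greeting label.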
If it's deployed in a cloud cluster, say on Amazon in this case, it will automatically instantiate an ELB for you, so the service is now accessible over the ELB. So you have a service ready to go, accessible on an ELB or a load balancer. Behind the scenes you have a deployment, and you can scale the number of replicas up and down. Every time you hit the service, it's going to pick one of the pods available as part of the deployment and return its response; that is the pod serving your request. Now, another part you need to understand: as you build an application that is microservices-oriented, your application will typically have a lot of microservices, and each microservice will map to a deployment and a service. So how do you manage all of these multiple deployments and services together? That's where the concept of a Helm chart kicks in. A Helm chart defines how your application needs to be described; it's the standard, or de facto, way to define your Kubernetes application. So if I scroll down a little bit, in my manifests directory, if I look at my app directory, that is what a Helm chart looks like. This is using Helm version 2. Version 3 is actively being developed and things are changing there, but at least for version 2, what you have is a top-level Chart.yaml, which defines the metadata about the application itself: the API version, the app version, the description of the Helm chart. Then in values.yaml you define certain values. Think of these as constants that can be used across your template files. If your application has multiple microservices, that means it has multiple deployments and services, and these values can be templatized in those description files.
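A sketch of the Helm v2 chart layout just described, with hypothetical names and values:

```yaml
# Chart.yaml -- metadata about the chart itself
apiVersion: v1
name: myapp
version: 0.1.0
description: Helm chart for the greeting application
---
# values.yaml -- constants shared across all template files
replicaCount: 1
image: greeting
```

The templates directory then references these values, so a change like the replica count is made in exactly one place.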
Now, if I go to the templates directory, that's where all the magic happens. Here I have my greeting deployment and greeting service. If I look at my greeting deployment, it looks very much like the deployment file we saw earlier, but now it's much more templatized, because I'm saying the metadata name is the release name plus "-greeting". The release name is set when the chart is installed, and values like values.replicaCount come from values.yaml. The whole idea is that you can now have multiple deployment and service files, and the template language gives you a standard mechanism for defining these values. So, for example, in my deployment the pod spec has release name plus "-greeting", and if I look at my greeting service, in the selector I'm again saying release name plus "-greeting". So the chance of a typo occurring is much lower, because you're using a standard template language and the values are specified in a single values.yaml file. So all that is fun. Now what I want to do is actually deploy this Helm chart and test this application. How do you test it? Again, you have multiple ways of spinning up a Kubernetes cluster, first of all on your local machine. Say I want to test on a Kubernetes cluster on my local machine; a couple of options. You can use Docker Desktop. If you're using Docker, at least on Windows and Mac, you can enable Kubernetes very easily: with Docker running, you go to the preferences and say enable Kubernetes, and what that gives you is a single-node Kubernetes cluster running on your local machine. Very easy to get started with and start playing with. That's one option. The second option is to use Minikube.
You can download Minikube, which pairs with something like VirtualBox to spin up a single-node Kubernetes cluster for you as well. Once you have either of those environments configured, you can install the Helm chart into that Kubernetes cluster and start testing your application there. The advantage is that whatever works on a single-node Kubernetes cluster can then be migrated to the cloud, where you are possibly running a multi-node Kubernetes cluster. Now, in terms of running a Kubernetes cluster in the cloud, there are several cloud providers. Well, I work for Amazon, so my preference is AWS; and as a matter of fact, more than 50% of Kubernetes runs on AWS, per the latest CNCF survey. What matters is where you run your Kubernetes, because you need that wide variety of compute choices and scalability, and you get all of that as part of an EKS cluster. Amazon EKS is our fully managed Kubernetes service. It gives you a control plane; you bring your data plane and attach it, and that becomes your Kubernetes cluster. It's a very easy way to get started. You install the CLI called eksctl, with brew install weaveworks/tap/eksctl, and then literally you can say eksctl create cluster. If I just give this command, it will create an Amazon EKS cluster for me. Now, I have specified some values here, the region and the number of nodes I want, so this will give me a four-node Amazon EKS cluster. And if I go to my terminal: if I say kubectl config get-contexts, it shows I have multiple clusters running. At any given point in time, if you look at this, for example, I have a Minikube cluster running, I have two Docker Desktop clusters running, and I have a few clusters running up in the cloud.
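The cluster-creation and context-switching commands just mentioned, as a sketch; the region, node count, and context name are illustrative:

```shell
# Install the eksctl CLI via the Homebrew tap mentioned in the talk
brew install weaveworks/tap/eksctl

# Create a managed EKS cluster; the flags shown are illustrative
eksctl create cluster --region us-west-2 --nodes 4

# List the clusters kubectl knows about, switch to one, inspect its nodes
kubectl config get-contexts
kubectl config use-context my-cloud-cluster
kubectl get nodes
```

The same Helm chart installed locally can then be installed against whichever context is current.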
Once I have this application running in my local environment, switching to an environment in the cloud is pretty straightforward. For example, right now I am running this on a GPU-based cluster, and if you look at the current column, it says that is the current cluster. I can say kubectl config use-context and pick a different cluster; say I pick this cluster here. Now the context switches to that, and if I say kubectl get nodes, it shows me the number of nodes available in that cluster. So really, all you need to do is switch your context from the cluster running on the desktop to the cluster running in the cloud and redeploy the application. That is the recommended methodology, and that is how you migrate from dev to prod. Now, one of the common things in your applications is that you want to use the right language for the right job. You may be using Java, you may be using Node, you may be using Go, a wide variety of languages, because at the end of the day these are microservices which are talking to each other using APIs. The important part is that you want observability into those microservices. So how do you get observability? Well, if you start adding libraries for each specific language, so Java and Node and Go, it quickly becomes a framework explosion, because then you need to maintain different versions of these libraries, and your application becomes cluttered. One of the common design patterns we have seen in the real world is where customers use Envoy as a proxy. Envoy is a CNCF-graduated project, so we were happy to support it. What customers do is this: if you think about a pod, the pod has one container, which is your application container. But what you can do in the pod is add a sidecar container, which sits alongside the main application container.
Now the sidecar container and the application container can talk to each other as if they are on the same localhost. So what essentially happens is that communication between application containers across pods doesn't happen directly; it always happens through the sidecar. If you think in terms of the logic: the application container talks to its sidecar container; the sidecar container looks at the policies being enforced at that point in time; and then it talks to the other sidecar container, which in turn talks to the other application container. And because all the network traffic is going through the sidecar, it allows a lot more observability in your application. With Envoy as the basis for your network traffic, you can do a lot of fun things. You can start doing things like canary deployments. You can say: version one of the service is running, and I want to introduce version two, but when I introduce version two, I only want to direct the traffic to five percent of the users, or maybe only the users coming from a certain geography. So you can start narrowing the traffic to a particular part of the world. That is not really possible if you are using raw Kubernetes, and that's why service meshes are becoming extremely popular. So essentially, with the Envoy sidecar, you're running a sidecar across all of your applications; you're running Envoy across all of your applications. What you need then is a control plane on top which manages all of those sidecar containers, and that's where things like Istio or AWS App Mesh come in. Istio works very well on Amazon EKS if you were to use it; I actually wrote a blog post on how Istio runs on Amazon EKS. But AWS App Mesh is a fully managed service that we provide from Amazon as well.
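As an illustration, a pod with an Envoy sidecar alongside the application container might look like this; in practice the mesh usually injects the sidecar automatically, and the names and Envoy image tag here are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: greeting
spec:
  containers:
  # the application container
  - name: greeting
    image: greeting
    ports:
    - containerPort: 8080
  # the Envoy proxy sidecar; pod traffic is routed through it,
  # which is what enables mesh-wide policy and observability
  - name: envoy
    image: envoyproxy/envoy:v1.9.0
```

Both containers share the pod's network namespace, which is why they can reach each other on localhost.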
In this section, I walk you through how easy it is to get started with App Mesh, particularly if you're using Amazon EKS. It walks you through how to set up the IAM permissions and how to configure App Mesh. Essentially, what I'm doing is enabling a namespace such that any time you deploy an application into that namespace, it will automatically inject the Envoy sidecar for you. It walks you through the entire process. Then, just as you deploy your application into the Kubernetes cluster, you need to deploy certain App Mesh-specific constructs: you need to create a mesh, you need to create virtual nodes and virtual services, and then you can start doing traffic shifting. Traffic shifting is the concept where you say: I'm introducing a new version of the application; direct only 5% of the traffic to a certain type of user. You could say: if the users have a particular cookie set, direct the traffic to them; or if the user is coming from a certain geography; or, if I don't care about segmenting, direct only 5% of all traffic to the new service. All those things can be done using these deployment descriptors. And then there are full details about how to do the same using Istio as well. Now, the last part I want to talk about is how easy it is to get started with deployment pipelines. Skaffold is a tool that is available in open source. Typically, as part of your development experience, you want this: as soon as I make a change in my source code, something should compile it, build a Docker image out of it, update the Kubernetes manifest, and redeploy the application to my Kubernetes cluster, all happening in your local development environment. That is exactly what Skaffold is used for.
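A minimal skaffold.yaml sketch for the loop just described; the apiVersion, image name, and manifest path are assumptions:

```yaml
apiVersion: skaffold/v1beta7
kind: Config
build:
  artifacts:
  # image rebuilt whenever the source tree changes
  - image: greeting
deploy:
  kubectl:
    manifests:
    - manifests/app.yaml
```

With this in place, `skaffold dev` watches the source, rebuilds the image, and redeploys to the current kubectl context on every save.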
Essentially, you download Skaffold, you set up your configuration using a skaffold.yaml, and it relies heavily on convention over configuration. You put your files in a specific location; it picks them up and sets up a build environment where it is constantly redeploying. So as soon as you make a change in your IDE and save it, it triggers the entire build cycle, and your application is automatically deployed to the cluster. It's very good for iterative feedback; you get that feel right away. Now, that's one part of it; that is good for local development. When you're going into the cloud, that's where you need a full-blown deployment pipeline. Of course, Jenkins is a good solution and a lot of our customers use it, but a lot of the time customers also want something fully managed, and that's exactly where AWS CodePipeline comes in. This section walks you through how easy it is to set up an AWS CodePipeline-based deployment pipeline in the cloud. It's a fully managed service, so there is nothing you need to download or manage on your local machine, and once AWS CodePipeline is set up, once again, every time you make a change and do a git push, it triggers the entire deployment pipeline up in the cloud and guides you through the entire process. You set up your GitHub token, set up your IAM role and cluster name, and then you see what the pipeline looks like: here's my source, here's my build, what's being built, and then it walks you through the entire deployment pipeline. And of course, you can see the instructions for Jenkins X as well. So let's go back through the entire lifecycle of how it would look if you were building your application from a Java developer perspective. First, make sure you choose your base image wisely. Choose your tools wisely.
Are you using a multi-stage Dockerfile, or a Maven plugin, or a Gradle plugin? There are lots of options. Make sure to leverage the custom JRE that is available from JDK 9 onwards. Those are some of the things you need to look at. Then you describe that Docker image in a Kubernetes manifest file. You can test that on your local machine: Docker Desktop, Minikube, lots of different options there. Then, as you're migrating those applications to the cloud, consider Amazon EKS. This is a managed service in the cloud: eksctl create cluster gives you an Amazon EKS cluster; switch the context, and then you migrate your application over. Then you start looking at how to get observability into your application, because a lot of these services are running; that's where your service mesh comes in. And last but not least, think about what your deployment pipelines look like. Those are the steps you need to think through. Hopefully this repo helps you get started. Once again, as I said, all the code is here. If anything is missing, I would encourage you to file issues or send me a pull request. I'm going to be around for the rest of the day at the AWS booth; I would love to talk. Thank you so much.