Well, hello, welcome to another DevNation Live. I'm actually here in Milwaukee hanging out with a group of people talking about Istio, and it turns out we're in the Central time zone; I didn't realize that until just a moment ago. But we are live and we're ready to rock, and the good news is we're here to talk about Kubernetes, Kubernetes patterns specifically today. We have Roland Huß here, who's coming to us from Europe, and he is our resident expert on all things Kubernetes. He's been giving this talk all around the globe, and it is actually a very exciting topic. I'm certainly very excited about Kubernetes, and I know the folks here in the room are excited about Kubernetes and Istio in particular. So we do have a live audience here with us, but we have several hundred of you on the line; it looks like almost 600 people right now. So we want to make sure we get your questions in the chat tab, so keep that in mind. I'll be trying to answer questions in the chat tab in real time, but we'll also collect questions for Roland at the end. He'll do his presentation, and then we'll ask him some questions verbally as we get towards the end of our session. All right, so hopefully you're ready to go. Roland, it's over to you now.

Yeah, thank you very much, thank you for the introduction. Welcome everybody, it's really awesome that so many people are joining here for some Kubernetes goodness.
So let's get started. This talk is about Kubernetes patterns, and of course before we get into the concrete patterns, let me give a brief overview of Kubernetes itself, in case you do not know about it. Kubernetes is an open-source container orchestration platform which was open-sourced by Google four years ago, and it's all about the orchestration of containers, mostly Docker containers. Orchestration is about scheduling, which means that you want to distribute your containers over a whole fleet of nodes; that is one of the most important tasks of such an orchestration platform. Another high-level feature is that Kubernetes provides horizontal scaling, which means that you can have many copies of your application running on different nodes. It is responsible for scaling up and down, and it also has a way to do autoscaling, using some kind of metrics to find out how many pods are really needed.

A very important characteristic of Kubernetes is that it is self-healing. This is due to the fact that Kubernetes itself is a declarative, resource-centric platform. What does that mean? It means that you declare a target state: the state which you want your application to have, for example how many copies should be running and what the image for your application is. You declare this and hand it over to Kubernetes, and Kubernetes then checks what the current state is and compares it against this target state. If the current state and the target state differ, then Kubernetes does a reconciliation to get to the target state. There are controllers running in the background which are constantly monitoring the system and trying to reach the target state, and if you want to change something, you just change the target state. That's how Kubernetes works internally, and everything is based on these declarative objects, the so-called resource objects.

Another nice feature of Kubernetes is service discovery. We all live in a kind of microservice world, so we have a lot of different services which depend on each other, and of course these services need to find each other. Kubernetes helps here by using DNS, the domain name service: you just look up another microservice by its name, and everything is handled by Kubernetes, which manages this DNS server.

The last high-level feature I want to present here is about installation, updates, rollouts and rollbacks. Kubernetes helps you get your applications out to the cluster, and it also helps you make updates; it knows several different update strategies, as we will see in a minute, and this again is all done declaratively, which is very nice. When you access a Kubernetes cluster, there's a central API server which exposes a resource-centered REST API, which means you have very standard CRUD operations for creating, updating and deleting these resource objects. Each resource object has a certain type, and you will see some examples in a second. So let's go to the next part.
So that was your high-level Kubernetes introduction; you will see much more in the next slides. But before we come to this, I want to explain shortly what I mean by design patterns. What are patterns? A pattern is a kind of repeatable solution for a software engineering problem. Repeatable means that it's not a very concrete solution for one concrete problem, but a solution for a whole problem class, so you can use the pattern in different circumstances and different contexts. That's where patterns come from, so let's have a quick look into the history of patterns. There was a very famous book written in 1977, "A Pattern Language" by Christopher Alexander and his co-workers, and it is actually not about software; it's really about architecture, which is very interesting. You find several hundred patterns there which differ in granularity. There are patterns like City Country Fingers, which describes how you should design whole cities, but also very fine-grained patterns, for example one pattern about varied ceiling heights, which says you should architect your house in such a way that the rooms have different ceiling heights. Each of these patterns has a name, they are interconnected, and together they build a sort of language, which is quite nice, because with this language you easily have a common ground to build on. These ideas swapped over in the 90s to the very famous book I guess every one of you has already heard about or read: "Design Patterns" from the Gang of Four. There are patterns like singleton, factory, or delegate, and if I say "singleton" to you, then you already know what it means.
So we do not have to explain the concept behind it; we are all on common ground. That is the nice thing about patterns, and since then there have been many other books about patterns, for example about enterprise application integration patterns, and also for more specific topics like Kubernetes itself. My colleague Bilgin has started a book about Kubernetes patterns, and I'm co-authoring it, and what I'm going to present to you in the remaining 20 minutes or so are really patterns out of this book. Of course the book holds many more patterns, which I cannot explain here for lack of time. The interesting thing is that Kubernetes itself, as an orchestration platform, already provides a lot of pattern implementations out of the box. Some of the core features of Kubernetes are patterns; they are the collected experiences from Google. So the book is not only about patterns which sit on top of Kubernetes, but also describes the patterns which are inherent to Kubernetes itself.

Okay, that's it for the intro; now let's get started. Every pattern I'm going to show you has a very simplistic structure, for the sake of this presentation. First I will describe the problem which is going to be solved by the pattern, and then each pattern has a name and a solution. If you're interested in the theory behind patterns, there's a very interesting article written by Martin Fowler about the different formats in which patterns can be written; I recommend this article if you want to know more about how patterns are written down. Okay, but now let's really get started.
First of all, let's talk about the foundational patterns. As I mentioned, these are really about the core concepts of Kubernetes itself, and this first pattern is all about how you can create and manage applications with Kubernetes. It's called the atomic unit, and this unit is exactly the kind of resource object I mentioned in the beginning. The most important of these resource objects is a pod, which is the atomic unit of Kubernetes; it's a collection of one or more containers. On top of this there are services, which are the entry points to these pods, that is, how you can reach them via the network. Another very important concept of Kubernetes is the metadata on all of these objects: every object can carry labels and annotations as metadata, and you can also group resource objects into namespaces to separate them.

So what does such a pod look like? As I mentioned, the pod is the atom. It gets an IP address; everything in Kubernetes runs on an internal network which is hidden from the outside, so typically you cannot reach this internal network easily. As mentioned, pods can carry labels, which is very important, as you will see in a second. Then you have containers running within the pod. It's really a collection of containers; mostly you have pods with one container, but there are certain situations where you use multiple containers within a pod. All containers of a pod share certain characteristics, especially the network namespace.
Every container within a pod can reach the other containers via localhost, and they can also share file data via volumes. This is not done automatically, but you can configure a pod to share file data this way. A very important aspect here is that such a pod has an ephemeral IP address, which means that when the pod dies, because of some outage or for any other reason, and comes back again, it typically gets a different IP address. This is important: you typically, actually never, access a pod directly via this IP address. You need a kind of abstraction over it, as we will see in a second.

Before we come to the so-called service, which is the abstraction over the pod for reaching it, there's another important concept, the so-called replication controller or replica set, which I will show you in a moment. Before that, I have this slide here which shows you what such a pod declaration looks like. You have a structure which is common to every resource object in Kubernetes: you have a kind at the top, and, sorry, there's something off with the indentation here, but this YAML file then has sections like a metadata section and a spec section, which you see in every resource object. The spec section contains a list of containers; each has an image name which references a Docker image, and a name, and you can expose ports from a container. As I said, you can have multiple of these containers. This is a very simple declaration of a pod; in real life, in the wild, you will see much more complicated versions of this declaration.

Okay, but now to the next part, namely the replica set, because such a pod is just running, and if it dies, then it's gone.
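A minimal pod declaration of the kind just described might look like the following sketch; the name, labels and image reference are illustrative, not from the talk:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: random-generator          # illustrative name
  labels:
    app: random-generator         # labels used later for selection
spec:
  containers:
  - name: main
    image: k8spatterns/random-generator:1.0   # illustrative image reference
    ports:
    - containerPort: 8080         # port exposed from the container
```

You would hand this over to the API server, for example with `kubectl apply -f pod.yml`, and Kubernetes takes care of scheduling it onto a node.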
But I said one of the characteristics of Kubernetes is self-healing, which means that a pod gets recreated if it dies, and you need some kind of entity which is responsible for this. This is exactly what a replica set is for: it is responsible for managing pods. A very important number here is the number of replicas, which simply states how many copies of your pod should be kept running. A replica set also holds a template for creating new pods; the pod declaration which we have seen on the slide before is embedded in the replica set declaration, so that the replica set controller knows how to create a new pod when it has to. Finally, you have a label selector for choosing pods, which is important so that the replica set controller can find out how many pods are really running and compare that to the declared number of replicas. If fewer pods are running than declared, it creates a new pod; otherwise it kills one.
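The pieces just listed, the replica count, the embedded pod template, and the label selector, fit together like this sketch; names, labels and the image are again illustrative:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: random-generator
spec:
  replicas: 3                     # number of pod copies to keep running
  selector:
    matchLabels:
      app: random-generator       # label selector for finding managed pods
  template:                       # pod template used to create new pods
    metadata:
      labels:
        app: random-generator     # must match the selector above
    spec:
      containers:
      - name: main
        image: k8spatterns/random-generator:1.0
```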
Okay, here you see an example of what such a replica set looks like. You have the selector, you have the pods, and we have specified that we want three replicas; we have three pods running, so everything is fine, and you see the selector and also the labels which are attached to each pod. Okay, the final resource which I want to show you here is the service, which is, as I mentioned, the entry point for a set of pods. Again, the pods which sit behind the service are chosen by a label selector. What's interesting is that this service has a permanent IP address. It's a kind of virtual concept which doesn't go away, so a service is always running; it has no own lifecycle in that sense. If you access a service via its IP address, it does a kind of mini load balancing back to the pods, a round-robin distribution. The service still has an internal IP address, though, and if you now want to expose this service to the outside of the Kubernetes cluster, you need yet another object, the so-called ingress object. There you have an external load balancer which picks up the external requests and delegates them to the services. Okay, this again is only a brief overview of the core concepts.
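A service selecting those pods via their labels could be sketched like this; the port numbers are made up for illustration:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: random-generator
spec:
  selector:
    app: random-generator   # pods behind the service are chosen by this label
  ports:
  - port: 80                # stable port on the service's permanent IP
    targetPort: 8080        # container port the requests are forwarded to
```

Inside the cluster this service is then reachable via DNS under its name, and requests are distributed round-robin to the matching pods.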
Sorry, I have to go back one slide. There are many, many more of these objects. You see the pod here in the center, but there are many objects which are managed by Kubernetes, and the way you manage them is always the same: you have this kind of atomic unit, you hand it over to the API, and then Kubernetes takes care of it.

Okay, now let's switch to the next pattern, which is all about how you can deploy your applications, how you can update them, and how this works with Kubernetes. You have to distinguish between a declarative and an imperative deployment; what I want to talk about is the declarative one, where you again just specify your target state and how you want an update to be done. A deployment is again another resource object, and it again holds a template of a pod. But a deployment does not create the pods directly; it creates a replica set on the fly, and it can manage multiple replica sets, one for each version, for the old version and for the new version. So it can also do an easy rollback to the old version by just activating the old replica set. As we have seen, a replica set holds the number of replicas, and the deployment just varies this number to ensure that the old version gets scaled down and the new one gets scaled up. It knows different update strategies out of the box, which you can declare simply as a field. What's also interesting about the deployment, as a side note: the deployment is a resource object which was inspired by OpenShift. In case you don't know OpenShift, it is an enterprise distribution of Kubernetes made by Red Hat; it's Kubernetes at its core with some extra features on top. The deployment which I'm describing here was really one of the first features of OpenShift which was not present in Kubernetes at the time, but the deployment concept within OpenShift was so successful that the Kubernetes community decided to implement a similar concept within Kubernetes itself. It's a good example of how the communities of Kubernetes and OpenShift are interwoven and exchange ideas in both directions.

Okay, now let's have a look at the different update strategies. The first one is the so-called rolling update, and the good thing about a rolling update is that you can realize a zero-downtime update, so your service never goes down. How does it work? As mentioned, you have a deployment and you have a replica set; the blue one at the top represents the old version 1.0. Then you specify the new version, and the deployment scales down the old version and scales up the new version one by one: it stops one container from the old version and starts one of the new version, so that at any time containers from either the old version or the new version are running, and the service just dispatches to both of them. This works perfectly fine for stateless applications, of course. But if you have some kind of database behind it and your update needs a schema update on the database itself, then you have to ensure that both versions can work at the same time on the same database schema. This is possible, but you have to help here as an application developer; you have to add support for allowing both versions to run on the same database schema. So a rolling update is not for free; I just want to mention that.

The alternative to the rolling update is a fixed update, where you just say: I'm tearing down all of my old versions, switching off the service, and firing up the new ones. Of course in this case you have some kind of downtime, but in the time in between, before you scale up the new version, you can of course make any schema update you want.

Let's go on. Then there are some variations, like a canary deployment. This is actually not directly supported by Kubernetes as a concrete deployment strategy, but you can easily do it on your own by just using two different deployments. In this case you just scale up one instance of the new service, maybe scale down one of the old ones, and run them both in parallel. Then you can check whether your new service works as you like; you can test new features with canary users, and if you don't like it, you can bring the old application back to full capacity and throw away the new version. Of course the same constraint as for the rolling update applies: both versions need to be able to run on the same database schema.

Finally, we have the blue-green deployment, which is a little bit of a mixture between the rolling update and the fixed update. Here you run both versions in parallel; your new version could even run against a different database if you like, and you can test everything on this new version to see whether it works the way you want. Then you do an atomic switch from the old version to the new version: you cut over the service so that it no longer delegates to the old application, but just to the new one, and this can be done in an atomic fashion. The drawback here is of course that for a certain amount of time you need twice the capacity for your application; in this case you have six containers running instead of three. And here's a nice picture, drawn by Bilgin himself, which graphically demonstrates the several strategies you can choose from. You see the rolling update, where we create a deployment, and for blue-green releases you see that you need twice the capacity, which typically also means twice the cost for that time; the same goes for the canary one.

Okay, that's it for the deployment part. Now let's move on to our next category of patterns, the structural patterns. The first one is all about the problem of how you can initialize your applications, and for that there's the so-called init container, or initializer. An init container is also part of a pod, which means it's a container you declare in your pod manifest, but it's a one-shot action which happens before your pod's application containers start. It's a container which finishes after a certain time, and only when it has finished are the real application containers started. An important characteristic of these init containers is that they should be idempotent, because your pod can go down anytime and come back again, so you must be able to run the init containers multiple times without any side effects. Also important: an init container has its own resource requirements, so you can declare how much memory and CPU such a container requires, the same as for the application containers, which we describe in yet another pattern called Predictable Demands; unfortunately there's not enough time to show that one. In this graphic you see how it works. You can have multiple init containers, which run one after the other. Init containers are really good, for example, for doing some kind of preprocessing, like initializing configuration or initializing a database schema, something like that. When the last init container has finished successfully, all the app containers start in parallel. You will see in many situations that init containers are very useful, and using them to initialize your applications is a very common practice. Okay, then let's go on to the
next pattern, which is probably one of the most famous patterns in Kubernetes: the sidecar. The question this pattern answers is: how can I extend the functionality of an existing container? It's about runtime collaboration of containers. As we have seen, a pod is a perfect vehicle for realizing such a sidecar, because all the containers within a pod share network and volumes. So you can do something like this: you have two containers running in a pod. The main container is a plain HTTP server, a Node.js server for example, which just serves data from disk, and that is all it is dedicated to; this main container doesn't know anything about updates of the data it serves. But you can easily stick a sidecar next to this main container which periodically checks some git repository on the outside and updates the local data on disk. This way you add new functionality to your whole application, namely a periodic update of the data you want to serve, without changing the main container itself.
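A sketch of such a sidecar pod, assuming nginx as the main server and a git-sync image as the sidecar; the repository URL and the git-sync image tag and environment variables are assumptions for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-app
spec:
  containers:
  - name: app                            # main container: serves files from the shared volume
    image: nginx
    volumeMounts:
    - name: content
      mountPath: /usr/share/nginx/html
  - name: git-sync                       # sidecar: periodically pulls content from a git repo
    image: registry.k8s.io/git-sync/git-sync:v3.6.2   # assumed image and tag
    env:
    - name: GIT_SYNC_REPO                # assumed configuration variables
      value: https://github.com/example/content
    - name: GIT_SYNC_ROOT
      value: /tmp/git
    volumeMounts:
    - name: content
      mountPath: /tmp/git
  volumes:
  - name: content                        # emptyDir volume shared by both containers
    emptyDir: {}
```

Neither container knows about the other; the composition in the pod declaration is what adds the new behavior.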
This is very nice because you can compose more complex applications out of simple containers, and in the book we have a very easy example of how you can do this on your own; that example is sketched in the book if you're interested.

Then we have some specializations of sidecars. One is the so-called ambassador, which is a way to decouple a container's access to the outside world; it's also known as a kind of proxy. Typically, infrastructure services like circuit breakers or tracing, and even many of the service meshes out there, realize their infrastructure services via such an ambassador or proxy. It works like this: your application container does not connect directly to the dependent service it wants to reach, but just to a port on localhost, and then you have a sidecar container which listens on this port and acts as a kind of proxy to the outside. In this example it's just a cache which is added to a database access.

The opposite of the ambassador is the adapter, which decouples access to a container from the outside world, meaning that you can provide uniform access to your application. A good example here is monitoring. Let's assume that you have a monitoring system like Prometheus, and Prometheus expects your monitoring data in a certain format, but not all of your applications can deliver this format. You just add an adapter container which transforms the proprietary format into the data expected by Prometheus. This is also a very nice pattern, and you will see it quite often. Okay, let's go quickly to the advanced patterns.
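Before moving on: the adapter setup for the monitoring example just described could be sketched like this, where both images are hypothetical placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: monitored-app
  labels:
    app: monitored-app
spec:
  containers:
  - name: app
    image: example/legacy-app:1.0        # hypothetical app exposing metrics in a proprietary format
    ports:
    - containerPort: 8080
  - name: exporter                       # adapter: translates metrics into the Prometheus format
    image: example/metrics-exporter:1.0  # hypothetical exporter image
    ports:
    - containerPort: 9090                # uniform endpoint that Prometheus scrapes
```

Prometheus then scrapes the exporter's port, while the application itself stays unchanged.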
These go a step further; here the idea is: how can I extend the platform itself without changing it? This is where the idea of a custom controller comes in. A custom controller is an entity running in the background, a service, and it registers for certain Kubernetes events. If any of the resources changes via the API, these controllers get notified, and they can react to the changes and perform certain actions, for example this kind of reconciliation. As I mentioned, they listen to the API, do the state reconciliation, and make the current state match the declared state. They are often used together with custom resources, which is the next pattern. There tend to be two categories of controllers: there are simple extension controllers, which extend Kubernetes with new features, but there are also more and more application controllers, which bring business logic into Kubernetes, combined with an application-specific domain. This is very interesting, and it's something which is currently emerging more and more as a pattern.

Let's talk briefly about custom resources. Custom resources are declared via custom resource definitions, which are managed by Kubernetes. You can create your own types of data and access them with the Kubernetes API like any other resource, and then use a custom controller with them. It looks like this: you define a resource of type CustomResourceDefinition and define the name; in this case you get a Prometheus kind. After you have registered this CRD, you can really use new objects with the kind Prometheus, and of course, to make sense of such a Prometheus resource, you have a controller running in the background which reacts to these custom Prometheus resources. Here's an example of what such a custom resource could look like, and you probably guess that this resource is used for installing Prometheus in your cluster: if you want a Prometheus server, you just install the controller, issue such a resource, and then you're done.

Okay, the final slide here is briefly about operators, because this is also a pattern which has been emerging. An operator is actually not much more than the combination of a custom controller and a custom resource; it's just another name for that. There's currently a very nice framework by CoreOS which provides an Operator SDK for creating your own operators. It's a Go-based SDK, so if you want to write your operators in Golang, this is the place to look. There's a lifecycle manager which allows your operators to be updated and managed like applications, and there's also metering, which is about monitoring the usage of your operators.

Okay, here's the final picture. As I mentioned, you have a whole spectrum of controllers. You have simple controllers which even work without any custom resource, but you can also have much more sophisticated controllers like operators. You can even go so far that you have, for example, a resource which describes a Kafka topic, and by deploying this resource the topic gets created, and then you have some kind of messaging already in place. The same for enterprise application integration: you deploy an integration custom resource, and some kind of Camel runtime gets created automatically. These are the ideas which are emerging more and more.

Okay, I think we are at the end now. I could only give a very brief overview of some of the patterns; you find many, many more of them in this book if you like.
There's also a free white paper from Red Hat which you can download, which has a summary of these patterns; the link will be in the published version of this slide deck. And of course there are also many free, awesome books written by Red Hat people, also linked in the published slides, about topics like Istio and service meshes, so you can get all sorts of various aspects of Kubernetes; Burr can probably tell you more about them. Yeah, so that's all I have for now.

All right. Well, you wouldn't believe the volume of questions we have had in the chat tab over here; it's been about a million, and I tried to answer as many as possible. Do you have any specific personal advice on how to run a database on a Kubernetes cluster? Do you have any specific patterns there that you think are appropriate? That came up a lot: running my database, even my storage, on the cluster.

Yeah, actually, I think this is one of the really interesting topics, stateful applications like a database, and actually I don't have a complete recommendation. There are two camps: one says, okay, we run the database outside of the cluster and just access it from there; but more and more people really want to have everything within the cluster, and there are of course specializations for databases, like for Postgres, and other add-ons which allow you to run such stateful applications within the cluster. There's even a resource called StatefulSet which allows you to configure this, but as I said, I don't have a complete recommendation.

Okay. And then there was a neat question related to, you know, you mentioned controllers and the use of CRDs and the concept of extending the platform APIs with that. Could you expand on that a little bit more? I think that was a point of confusion for some folks: what do you do with that? What's the primary use case?

Yeah, one good example is that you have some custom resource for describing another application you want to install, like Prometheus or a database. Another example for such a custom controller is that you might want to expose a service with an ingress object. You have seen these services, and you could imagine putting a kind of metadata on the service, a label like "expose". Then you could have a custom controller running in the background which checks every service for this label, and if it's present, it creates a route or ingress object on the fly to expose the service; there are actually controllers like this. Another example of how you could extend the platform with such a custom controller is to have a label on a ConfigMap, which is an abstraction of a configuration file, that points to an application. Every time the ConfigMap changes, a controller restarts your application, so you get a kind of hot reloading just by adding this new controller. These are the simple controllers, showing how you can extend the functionality of the platform even without custom resources.

Okay, and we're actually out of time, unfortunately; we're supposed to keep these to 30 minutes, and I appreciate that we have had a lot of people today, I think 700 or 800 people showed up. The last question, though, I'll make a point of: someone is asking about resource issues, such as worker nodes and JVM workloads, Java workloads. I know you know this, Roland, and I've added a link to my "Java Docker fail" presentation, so just look for "Java Docker fail" on Google; there's a blog post on this, and I have a whole presentation on it. Yes, by default the JVM will eat all your resources and blow things up, so that's covered in the intro class that we have. Roland, do you have any comments on that?
Yeah. One comment is that if you have the chance to switch to Java 11, then you should, because it is much more aware of containerized environments; this new Java version came out, I think, one week ago. If you're running Java 8 in a container, you should really be very careful; check out Burr's presentation and the recommendations there. You really should set the maximum memory for your JVM when you start it, and you should really put your JVM into a kind of cage yourself; otherwise you will blow up your containers for sure.

Yeah, and Roland has some nice scripts in his base images that he supports that deal with that specifically, right? They go in there and look at the cgroup settings to ensure that your JVM doesn't blow up. So it is tricky, but it's very solvable once you learn the trick. Awesome. Thanks so much. All right, thank you very much everyone. Thanks.