OK. Welcome, everybody. Thanks for coming to my talk. The topic is Cloud Foundry and Kubernetes, and I'm pretty sure this is the first time this week you're going to hear about that. Well, I think I have to create a new talk; this one is kind of going out of fashion. You've probably heard many things this week about integrating both platforms and taking the most value out of them. Today, I'll try to compare them on an individual level. Before I go into that, I want to quickly introduce myself. My name is Matthias. I work for a consulting company called Novatech, located in Stuttgart, Germany. I'm also organizing the Cloud Foundry meetup in this area. Are there any people from southern Germany in the room? I've got a few. OK. Hi. Well, in case you didn't know it yet, you are always invited to come and join my meetup. I'm also working as a Cloud Foundry ambassador, so if anybody else is thinking about setting up their own meetup, feel free to ping me. I can try to help, and also help with speakers. This is my Twitter handle. I use it quite frequently, and I will also publish my slides and the code samples on that account after the talk. OK. So I guess most of you have seen this abstraction layer diagram in one form or another. These are the various abstraction layers where developers can place their code and workloads to run them in the cloud. By the nature of these things, the further you go down the stack towards a virtual machine, the bigger the size and footprint of the whole thing gets, the longer the startup time, and the tighter the coupling of the components. Moving up the stack, you have a higher abstraction level, more flexibility, and a higher distribution. Basically, your things tend to get smaller, and then you have more and more of them to manage.
So what I'm going to focus on today are the two abstraction layers of applications and containers, and I'm using Cloud Foundry and Kubernetes to show what it means to deploy and run your code there. If I say Cloud Foundry, I'm referring to the Cloud Foundry Application Runtime, and with Kubernetes, the container runtime. OK, as I said, this is most likely not the first time you've seen those two icons, and you will probably already be familiar with a couple of the things here. But I've also heard there are many people attending the conference for the first time, so I hope this gets you started on both platforms and gives you a feeling for how they tick and how they operate. When comparing Cloud Foundry and Kubernetes, you can do that on a couple of different levels. As a disclaimer right up front, this talk will not be about comparisons below the container level. So no discussions about what the infrastructure looks like or what you need to do to run and operate such a platform. I'll also try not to go into any vendor-specific implementations, stick to the core of the open source products, and avoid extensions and add-ons. Kubernetes in particular has a very lively ecosystem with lots of new things coming up pretty much every day, and I'll try not to go into those but really show how the platform works internally. So, to make this a little more visual: on the left-hand side, this is my Mrs. or Mr. Developer, who wrote an app and wants to deploy it to the cloud. The questions they're going to face are: what do I need to do in the first place to get it deployed to the cloud? And once it is running there, what can I expect from the cloud? What is it that the cloud provides for me in order to run my app better than in a non-cloud environment?
So we're going to look at things like recovery: if my application has a failure and crashes, what does the platform do to bring it back to life? We're going to look at scaling, or autoscaling. One of the key features of running software in the cloud is making your workload scalable, so it can serve multiple requests and scale up and down based on the request load. We're going to have a look at logging. Logging is very important for developers, especially once they don't run their code locally anymore, to give them some insight into what is actually going on. Service bindings: an app normally doesn't work without any services, or at least it's not going to be a very sophisticated app. So what do you need to do to bind and attach backend services like databases, messaging systems, or other legacy applications? Zero-downtime deployment: if your application has a failure and you want to correct it, how can you do that, and what does the platform offer, so that your end users do not experience any downtime? And finally, which languages and platforms can your application be written in? I think the challenge will be to fit all this into 30 minutes. I might skip one or the other item in the demo depending on how the time goes. But this is the last talk before the lunch break, and I'll definitely be around, so if you have any further questions, I'm happy to talk about them. OK, for the introduction: this is something I would expect most of you to know, so I'm going to go through it rather quickly. These are the real basics of how the Cloud Foundry Application Runtime works. The magic command, you've probably heard it a thousand times this week, is the so-called cf push command. That will trigger the following.
Internally, in the Cloud Foundry runtime, which is the top-left component, a new container image will be created. Cloud Foundry is able to detect the language you wrote your application in, or you can specify it. It will put in a buildpack component, which basically serves as the runtime, place the application on top, and package the whole thing as a container image. Worth mentioning for those who are new to the topic: everything happening down there happens without the user interacting with it. Internally, containers and container images are being used, with all the container advantages, but the end user is not exposed to that. In order to attach backend services, there is the cf bind-service command. That gives you a selection of various services, and the connection information, for a database, for example, the JDBC URL and the credentials, will then be injected via environment variables into your application, and the application can consume that. The deployer does not have to set any of that up manually. So Cloud Foundry has the twelve-factor separation between applications and services built in. Applications are basically your code that is running, and services are the backends, which makes it pretty easy to understand. In the end, a so-called route will be exposed to the end user. This is basically the URL through which the user can access the application and start to interact with it. Now looking at Kubernetes, I divided this into two parts, because the first thing you need in order to run something on Kubernetes is a container that you can deploy. That normally happens in the form of Docker, where you use a Dockerfile.
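As a sketch of that injection: Cloud Foundry puts the binding details into the `VCAP_SERVICES` environment variable as JSON. For a hypothetical MySQL binding (the service name, instance name, and field values here are illustrative, not from the demo), the application would see roughly:

```json
{
  "p-mysql": [
    {
      "name": "my-database",
      "label": "p-mysql",
      "credentials": {
        "jdbcUrl": "jdbc:mysql://10.0.0.5:3306/mydb?user=abc&password=xyz",
        "username": "abc",
        "password": "xyz"
      }
    }
  ]
}
```

The Java buildpack can typically auto-configure a data source from this, or the application can parse the variable itself.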
The Dockerfile specifies the base image of the container, the runtime component you need to have the application running, and finally puts the application on top. As you can see here already, the person working with this has a lot more possibilities to configure each individual piece of the whole picture. Of course, this also brings the responsibility to know what you are doing, because you have more potential to do things wrong. Once this is done, you push the container image you have built into a container registry, and Kubernetes will be able to consume that registry and pull the images as needed. OK, now looking at Kubernetes itself: the equivalent of the cf push command, and there are alternatives we're going to look at, is the kubectl run command, and that will trigger a couple of things. Let me explain a bit of the terminology here. In Kubernetes, the smallest scalable unit is the so-called pod. As far as I know, this comes from a natural term: a pod is a group of whales, and you have probably noticed that the Docker mascot, Moby Dock, is also a whale. The pod will reference the container registry and pull the image it needs. A pod, as the name already implies, can hold more than one container. The rule, and it's difficult to describe, is that you should normally only do this if all the containers in your pod have a common dependency on the file system or share something like that. To put it in other words: if they are not able to run on distributed nodes, then you should put them into one pod. Other than that, I normally prefer a one-to-one mapping. The replica set is the component that takes care of the scaling of those pods.
So if Kubernetes or the end user decides to scale up or down, the replica set will do that and control how many instances you're running. An object above that is the so-called deployment. This is the one responsible for the updates of your application. If you have one version running and you want to update to a new version, the deployment takes care of the individual versions and ensures a smooth transition. In the end, if you want to expose your application to your end users, there are various options: there are service and ingress objects. They take care of the network connection from your deployment and expose it in various ways to your end users. Again, comparing this to Cloud Foundry, you can see the storyline continuing here. The person who deploys the application is a lot closer to various infrastructure components straight away and needs to be aware of them. In turn, there's much better control and more possibilities to configure each and every piece the way it is designed. OK, so I will switch to a demo now. I'm taking one code snippet, and we'll try to deploy it in parallel to both Cloud Foundry and Kubernetes, so you can get a feeling for how the platforms behave and hopefully see something you haven't seen yet. Sorry, that was wrong. OK, for those of you into programming: it is a tiny Spring Boot application. It exposes two REST endpoints to the outside. There is a hello endpoint and a failing endpoint. Let's just personalize this a little bit. It's a little puzzle: what do people say here? Is that right? The non-Swiss people probably won't understand. The failing method triggers a System.exit. So if you invoke that method, it's going to kill the JVM, and I need that to trigger a failure scenario and see how the platform reacts when the application crashes.
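The real demo app is a Spring Boot application; as a self-contained sketch of the same idea using only the JDK's built-in HTTP server (the class name and endpoint paths are illustrative, not the demo's actual code):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Minimal stand-in for the demo app: a hello endpoint, plus a failing
// endpoint that kills the JVM so the platform's crash recovery can be
// observed from the outside.
public class SimpleWeb {
    public static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/hello", exchange -> {
            byte[] body = "Hello!".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.createContext("/fail", exchange -> {
            // Terminates the whole process; the platform should notice
            // the crash and restart the instance.
            System.exit(0);
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws Exception {
        start(8080);
    }
}
```

Hitting `/fail` from the outside is what drives the recovery demos on both platforms later on.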
I also have this getHostname method right here, which looks up the internal ID of the container, both in Kubernetes and in Cloud Foundry, to indicate whether multiple instances of the application are running. OK, is that readable for everyone, or should I make it a little bigger? If it's not good enough, just shout out loud. I have to go into this directory first, and since I've just changed the code, I have to build a new version. It's only that string that I changed, so it shouldn't take too long. Let's start with Cloud Foundry first. Let's see if I'm connected; I had some network issues before. Now this looks good. In Cloud Foundry, as I said, the magic command is cf push. You can do a cf push either with some command line switches, like here, giving it a name and defining which application artifact to take, or alternatively by using a so-called manifest file. Let's have a look into that manifest file. You specify the same things: the name of the application, and you can specify things like how much memory it's going to need and how many instances it should have when it starts. The really important thing is the application artifact itself. I mean, Cloud Foundry is pretty smart, but not smart enough to guess what you really want to deploy. So these are the two alternatives, and I'm going to trigger that command here and just invoke cf push. Then it will start doing its thing: as I showed in the diagram, it will download the buildpack, create the image, and so on. While it is doing that, to save some time, I will do the same thing on the Kubernetes side. As we've just built the code and have a jar file, we now need to create a container. This is the Dockerfile I'm using. It takes a base Alpine image which already has a JDK inside.
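A Dockerfile along those lines might look roughly like this (the base image tag and the jar name are assumptions, not the exact ones from the demo):

```dockerfile
# Small Alpine-based image that already ships a JDK
FROM openjdk:8-jdk-alpine

# Copy the jar we just built and run it as the container's process
COPY target/simple-web.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
```

Everything that the buildpack did implicitly on the Cloud Foundry side (base image, runtime, app placement) is spelled out explicitly here.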
The action is to copy the jar file we just built and have it as the running process of the container. So I can do a docker build here, and this will containerize the application. In the meantime, I can see that my Cloud Foundry application has started. This looks OK; we're going to have a look at it later. I have to switch back and forth. So now I have this container image. One thing I'm going to skip here is pushing the container image into a registry; I will just tell Kubernetes to pull the local one. In Kubernetes, one alternative is to say kubectl run, as I said before. Here I can also specify things like which image to pull and how many replicas I need, and give it a name; there is also this label concept that I might explain later on. Alternatively, I can do this through files. This would be the equivalent YAML file, where I can specify different strategies for the update, which image to take, the name, and so on. As you can see again: there's more you can do, but there's also more you have to do. If you do kubectl run, it will assume a few things, and I'm going to do that. Before I do, I will open up a watch command to show you what's happening under the cover. It will periodically check for the components of a deployment, a replica set, a pod, and a service, basically the ones we've just seen in the diagram. Right now there is only one service; that's a Kubernetes-internal thing. To get things running, I'm going to execute this command, and now you should be able to see a couple of things happening. On the top level, we have the new deployment. Below that, we have a replica set, and below that a single instance of our pod. That is where our application code is running. Good. So those two things were not that hard.
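The equivalent YAML file mentioned above might look roughly like this minimal Deployment (the names, labels, and image tag are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: simple-web
  strategy:
    type: RollingUpdate            # phase new pods in, old pods out
  template:
    metadata:
      labels:
        app: simple-web
    spec:
      containers:
      - name: simple-web
        image: simple-web:1.0
        imagePullPolicy: IfNotPresent   # allows using a locally built image
        ports:
        - containerPort: 8080
```

Applying it with `kubectl apply -f deployment.yaml` creates the deployment, replica set, and pod that show up in the watch output.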
What I can do now is, for example, ping the application and see if it comes back fine. Well, the umlaut doesn't really come out well, but I guess you get the point. This will continuously ping my application and ask if it's there. Then I can, for example, trigger that failure: I invoke a single curl command against the failing endpoint. When that happens, you can see on the right-hand side that the request cannot be handled anymore, because the application isn't available. What I could have done here as well is open some more watches. As you can see, the application is already back; it recovered really quickly. OK, so in this watch, I check the application details directly. If I re-invoke that command, causing the application to fail, you will see that Cloud Foundry immediately detects it, switches the instance from a running into a starting state, and immediately tries to bring the application back. This normally shouldn't take so long; as you can see, the application is now back. If I want to scale the application, the command is cf scale, and I'm going to scale it up to three instances. I can see that below here as well: I have one running instance and two which are in a starting state. The load balancer knows which one is already ready and which ones are not. As soon as the state changes from starting to running, the load balancer distributes the incoming requests across the instances, and you can see that by the changing ID of the container. So we have three instances now, and the logic inside just iterates over the three. Now the question is: what happens if I kill the application again? I'm invoking the failing endpoint again, and what happens is that one instance goes into the starting state while the other two are still running.
Cloud Foundry will isolate the broken components and only send traffic to the healthy ones. If I do it again and again, we can see the index of the application going back to one: I only have one healthy app at this moment, and Cloud Foundry knows that and puts the load there. I need to hurry up a little bit. I want to do the same thing on the Kubernetes side. What I need to do first is expose the application via a service. I'm going to create a LoadBalancer service, which will be listening on port 8080. Now you can see that this service has been added; it's this one here. And now I can try the same thing here. OK, so this is coming back. What I wanted to do now is scaling. There is also a kubectl scale command, and I want to scale this up to three instances too. And so it's scaled: you can see it creates some containers, and the containers are running. But you can see something else here as well: even though Kubernetes scaled up, we're not getting responses straight away. The reason for that, and this illustrates the difference between the abstraction layers of container and application very nicely, is this: Cloud Foundry has a mechanism with the buildpack. The buildpack gives the platform a way to talk to the application, so it can know when the application is ready. Kubernetes, by default, is not aware of anything inside the container. It doesn't know what is running in the container; it can only see whether the container process is running, and once the container process is running, it will start routing traffic to it. However, this is of course configurable. This is a bit of typing, so I'm going to switch to copy-and-paste mode. The command to change the configuration, and there are various ways, is to edit the deployment simple-web. That gives me the full YAML representation of how the deployment is configured.
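The readiness check being added here is a `readinessProbe` in the container spec, roughly along these lines (the path and timings are illustrative):

```yaml
# Added under spec.template.spec.containers[0] of the deployment;
# the service only routes traffic to the pod once this probe succeeds.
readinessProbe:
  httpGet:
    path: /hello
    port: 8080
  initialDelaySeconds: 10   # give the JVM time to boot
  periodSeconds: 5
```

This is the Kubernetes way of teaching the platform what "ready" means for the process inside the container, which the buildpack gives Cloud Foundry for free.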
In here, I have my container specification, and if I enter it now, I have that probe in there, which will ping the hello endpoint, and only if this endpoint is healthy will Kubernetes route traffic to it. Sorry. OK, now a couple of things are happening that you can see just over my head. Kubernetes has now created a second replica set, because since we've changed the configuration, in Kubernetes terms this is a new version of the application. It automatically uses the new one and tries to phase out the old one, but it will leave the old one running until it can be sure the new one is alive and working. So I'm just going to scale that back real quick: two of them are terminating, and the index goes back to one, pointing all the workload to the single remaining application instance. Once this is done, I'm going to scale it up again. OK, now it's creating the containers, but you can see the status is running while it is not ready. That means the mechanism is now active: Kubernetes tries to figure out when the component is in a ready state, and only then will it start to route the traffic. You can see this by the alternating IDs here now. So now the two platforms are basically on the same level. I have three minutes left, so what I'm going to do is show you the patching mechanism. I'm going to comment that out, and I'm going to add a little versioning statement, which makes it easier to display. So I've fixed the application and changed the endpoint message, and I need to do a new build. Initially I said I'm not going to use any extensions; this was a bit of a lie. I'm using one extension, which is just a script extension for the CF commands, called blue-green deployment.
In the pure way, you would do this manually: deploy a new application, run both in parallel, and then move the routes from one to the other. That is exactly what this extension covers for you, so I don't really consider it cheating. This is blue-green deployment. OK, now this is going to start doing its thing. I'm going to wait here real quick until it has been fully deployed, because what I really want to show you is the moment when the application switches from the original version to the new version. Pardon me? Sorry, I can't hear you. No, this is not the Kubernetes side; this is the Cloud Foundry side. Kubernetes is here. In the meantime, I can do a docker build with the new code I have just created. So they are in the starting mode now. OK, it has now changed a couple of routes and done its thing, and you can see that it is now running version 2. Normally you wouldn't change the endpoint message in that way, but I just wanted to show you that the end user will not notice any downtime; the application just continues to work in the desired way. And if I curl the previously broken endpoint, the fail method, it now just returns fixed, and the application continues to run. Same thing on the Kubernetes side: I can edit my deployment again. And no, this is wrong. I hope it's fine if I run a little over, as I think nobody is waiting for the room. If you're really hungry, always feel free to leave; I don't want to keep anybody from getting lunch. So the same thing is happening here again: Kubernetes has now built a third replica set, basically a new version of the application. Once this is ready, it takes out the old ones, like it is doing now, and it has also switched to version 2. So I'm going to skip the other things, go back to my slides, and do a quick recap.
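For reference, the manual route juggling that the blue-green plugin automates would look roughly like this (the app names, domain, and hostname are illustrative, and this is a sketch of the flow rather than a recipe):

```
# Push the new version alongside the old one under a temporary route
cf push simple-web-green -p target/simple-web.jar -n simple-web-green

# Start sending production traffic to the new version as well
cf map-route simple-web-green example.com --hostname simple-web

# Take the old version out of the production route, then remove it
cf unmap-route simple-web-blue example.com --hostname simple-web
cf delete simple-web-blue -f
```

During the map/unmap window, both versions serve requests in parallel, which is why end users never see a gap.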
So basically, what you should take away from this: on a functional level, there is definitely an overlap in what Cloud Foundry and Kubernetes can do. From an end user perspective, however, there is a significant difference. What I really like about Kubernetes is the ability to live-edit the code, or rather the config, in that YAML notation, the built-in zero-downtime mechanism, and the large functional scope. But that also means you need to know all the granular configuration options you have, and you need skills in Docker and Kubernetes. Yesterday, I saw a good presentation here about buildpacks and about doing things wrong in Dockerfiles; that is something you will not get in touch with if you don't operate on the container level. On Cloud Foundry's side, the clear strength is the simplicity. It's really easy to pick up and understand. It has the power of containers, but without exposing that to the end users, and this gives you a really fast path from application to platform. Buildpacks I basically listed both as a pro and as a con: they're awesome, but if you use some very exotic language and there is no buildpack for it, that might be a problem, though I haven't really come across that yet. You have fewer configuration possibilities; the question is whether you really need them. And blue-green deployment works technically absolutely fine, but you need to configure a little. So yeah, in the end: right now in this demo, it seemed pretty quick and easy how I put that container together and deployed it to Kubernetes. When I started giving this talk half a year ago, I put it in a different perspective. Now that I have worked with Kubernetes for half a year on a project, I see things a little differently.
If you think about doing this really live and in production, you will suddenly be exposed to things like the image registry becoming your single point of failure. If that image registry goes down, your entire build and deployment pipeline stops, because you can't push images and Kubernetes can't pull them. That is something you don't have to deal with if you're working above the container level. Questions also come up about who is responsible for patching the images, who handles vulnerability scanning, and so on. All those things are shielded from you if you use something like Cloud Foundry. In the end, the good news is that both of them work really, really well. I can't say I'm in favor of either one; I'd love to have them both, and I'm really excited to see that project going forward, because I think it goes in the right direction. So now I'm going to let you all out for lunch. I want to say thanks very much. One quick thing: on my GitHub account, which I will also link, I have the sample code and the sample instructions for all the things I did here, so in case you want to play with it yourself and get a feeling for how the platforms work, you can easily look into that. If you have any further questions, I'll probably take them offline, because I ran a little bit over. So thank you very much, and enjoy the rest of the conference.