First of all, thank you all for coming. I know this is the last talk before the lunch break, so I hope I can make this interesting enough that you don't all run out to grab the food. The title, as I said, is "Bring Your Own Code and Bring Your Own Container". My name is Matthias Heusler. I'm from Novatech Consulting, a consulting company in southern Germany. So, quickly about myself: on the right, that's the suit-and-tie version of me that you don't really get to see very often, but I guess as a consultant you're supposed to have such a profile picture. What I do in my regular job is, I would say, cloud-native consulting, so I help clients migrate their legacy applications to the cloud, or start greenfield microservice development and bring that onto a platform. Besides that, on a community level, I'm running the Cloud Foundry Meetup in the Stuttgart area. Is there anybody from Germany or Switzerland in the room? Oh, quite a few. Okay, so if you ever happen to come by, just let me know and I'll try to make sure we can set something up. Besides that, I'm also teaching cloud-native development at two local universities. I don't have a photo from that. And if you wanna reach out to me, that's my Twitter handle down there. So what is this going to be about? I'm pretty sure you have seen those two symbols today, maybe yesterday or the day before that: Cloud Foundry and Kubernetes. It seems to be one of the hot topics at the moment, and I'm using both in most of the activities in my daily job and in teaching. So today I would like to give a bit of an introduction to both technologies and compare them side by side. You've probably seen there are many talks around that topic on the schedule for the CF Summit, and you will see various variations of running them side by side, one on top of the other, integrated, and so on.
So the comparison that I'm gonna do is more on an isolated level. Last year in Basel, it was announced that Cloud Foundry is basically being split into an Application Runtime, which is the former Cloud Foundry itself, and the new Container Runtime, which is the Kubernetes-based architecture. I'm still struggling to adapt to those terms, and most likely I will fall back to saying Cloud Foundry and Kubernetes throughout this talk. So when I'm talking about Cloud Foundry, I'm talking about the Application Runtime, and whenever I say Kubernetes, that's pointing to the Container Runtime. Okay, so before we get into the details, just a little disclaimer about what this talk will not be about. It can be seen as a complementary talk to what Dr. Nick just said. This one will not be about infrastructure components and things like the footprint of my installation, how many VMs I need, how they fail over and scale, and all that. So it doesn't really go below the container level. It's also not going to be about things outside the out-of-the-box scope. I know there are many extensions and add-ons for both Cloud Foundry and Kubernetes, but it's gonna be hard to make a comparison if you include all of them. So it's really focusing on the core functionality of both technologies, without any vendor-specific implementation and without any extensions. What it really is about is the end user experience. So this is my end user. This end user has an application and wants to bring this application to the cloud. The things we're gonna look at are, first of all, how difficult it is, or what steps are involved, to deploy the application. And then I have, of course, certain expectations of the platform, and this will cover its various aspects. How does automatic recovery work when my application crashes?
How does scaling or auto-scaling work? Logging is, of course, a big topic, and if you scale, aggregated logging as well. How do I bind services to my application? How can I patch a failing application without exposing any downtime to the end user, so zero-downtime deployment, and what kinds of runtimes are supported for my application? Given that I only have half an hour, I will most likely not be able to cover all of those things, but I'll try to fit in as much as I can. So before the demo, I wanna give a high-level introduction to the basics of both concepts. This was flagged as a beginner-level talk, so I'll make sure I have everybody on the same page. This is the bit I would think people should be familiar with: the way Cloud Foundry works is that I execute a cf push command. What then happens is that my application goes into the Cloud Foundry framework, so to say; a container will be built and a buildpack will be applied. A buildpack basically provides a runtime that supports the implementation language of my application. That will be wrapped into a container image and then treated as an immutable artifact. Worth noting is that from an end user perspective, I don't really see the container level unless I really want to. My interaction point is really the application level. Additionally, a very important concept is the split between applications and services. Once I have deployed my application, I can bind it to services; in this case I tried to make an example with a database or a messaging system. Okay, and once it is deployed, I almost forgot, I'm basically gonna get a route or URL to access my application. Now on Kubernetes: if you're coming from a Cloud Foundry world and start with Kubernetes, which is at least how it was for me a while ago, there are two different steps that you have to do there.
So if you have your app, in the first place you have to build a container image yourself. You basically need a base container image and a runtime, or a base container image which already has that runtime. Normally you specify that through a Dockerfile, which will create your container, put the runtime inside, and also the application. Seen from that angle, you have more points where you have to interact, because you have to provide the image, the runtime, and the app. That also means you have much more control over how these things are handled. In the end, you have to push this to some kind of container registry; this is where all the images are stored. Now, part two, this is really about Kubernetes itself. The equivalent of a cf push would basically be a kubectl run. Kubernetes has a lot more objects, and I wanna describe a few of them and how they work internally. The smallest scalable unit in the Kubernetes world is a pod, and this pod can run one or more containers inside. What it needs is that container registry that I've shown before, so it will pull the container image from the registry and place it in the pod. As I said, you could potentially run more than one container in a pod; that's a significant difference to Cloud Foundry. You would need that in case the containers really share things like a file system, or, to put it another way, if two containers are not allowed to be deployed on different nodes, then you should group them within a pod. If you don't need that, you should stick to a one-to-one mapping. For the sake of today's comparison, we'll stick to a one-to-one mapping of an application to a pod. Now, in order to scale the pods to multiple instances, the object that you need is called a replica set.
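A minimal Dockerfile along the lines of what's described here might look like the sketch below; the base image tag, directory, and JAR name are assumptions for illustration:

```dockerfile
# Base image that already ships an OS layer plus a JDK (tag assumed)
FROM openjdk:8-jdk-alpine

# Create a directory for the application and copy the built JAR in
RUN mkdir -p /opt/app
COPY target/simple-web.jar /opt/app/simple-web.jar

# Command that starts the application when the container runs
CMD ["java", "-jar", "/opt/app/simple-web.jar"]
```

Building it and pushing it to a registry would then be roughly `docker build -t registry.example.com/simple-web:0.1 .` followed by `docker push registry.example.com/simple-web:0.1` (registry host is a placeholder).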
So a replica set basically controls the number of your pods and the way they're being scaled. The top-level object in this diagram is a so-called deployment. A deployment is what you need when you want to update or change the behavior of your pod, or of your application; the deployment will then take care of multiple replica sets and make sure the transition between them works fine. Additionally, to expose your application to the end user, you need either a service or an ingress; that's an additional component which exposes a certain endpoint to your user. So you can already see quite a few differences here compared to Cloud Foundry. I mean, there's more you can configure, and you can do more in a granular way. Also, there is nothing like the concept of a buildpack, or a separation between applications and services; everything is treated as a container. It's just a different abstraction layer: in Kubernetes you deal with containers, in Cloud Foundry you deal with applications. And that is a significant difference seen from the end user. Okay, so let me jump into the demo. I hope the demo gods are with me and you can see this fine. Before I start, just so I know whether I have to speed up on one or the other side: who of you has already pushed an application to Cloud Foundry? Okay, that's good. Who of you has already deployed an application to Kubernetes? That's a little less, which is what I expected. I'll try to cover both, but I'll speed up a little on the Cloud Foundry side. So I have this very simple Spring Boot application that doesn't really do much. It has two REST endpoints exposed. There's a hello endpoint which just says hello world, and additionally has a piece of the hostname in it; that is required to see that the application is actually scaling and using different instances. And below that I have an endpoint which I call fail.
What that one does is trigger a kill of the JVM and cause the application to crash, and that's what I need to demonstrate the failure behavior of the platforms. Now, in case you can't see it, just let me know. I'll try to increase the font, but I also have to fit a lot of information onto the screen, so this is gonna be a bit of a challenge. We start with the Cloud Foundry side. This is a manifest, which is basically a deployment descriptor for an application in Cloud Foundry. The application is called simple-web. Initially I wanna run it with three instances, and the most important part is the path to the JAR file of the application that I have just built. The memory is actually an optional setting, but I had to tweak it a little because I'm running everything here on my local machine; I didn't really trust the network and wanted to exclude that. The magic command, as you know, is cf push, and that will take its course. So now it's gonna download the buildpack, build that internal container, and do its thing. In the meantime, I'm gonna switch over to the Kubernetes side. As I said, the first thing you gotta do is build your container, so I have a Dockerfile that I have prepared for that. It takes a base image that already has an operating system layer and a JDK included, creates a directory for my application, copies my application in there, and then has a command to start the application. So I'll run the docker build command, and I'm gonna call this image simple-web in version 0.1. Now it has built my image; my image is there and we're ready to move forward. In the meantime, let's see what things look like on the Cloud Foundry side. I'm gonna split my screen here a little. I'm gonna run a watch command, and I need to decrease the font a little here.
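For reference, a Cloud Foundry manifest matching that description might look roughly like this; the memory value and JAR path are assumptions:

```yaml
---
applications:
- name: simple-web
  instances: 3                  # run three instances from the start
  memory: 256M                  # optional; tuned down here for a local setup
  path: target/simple-web.jar   # the JAR that was just built
```

Running `cf push` in the same directory picks a `manifest.yml` like this up automatically.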
So you can see it down here, and I hope you can read it: we have three running instances of my application. So the two things that I wanted to cover, deployment and scaling, already went pretty well. Now, in order to check that they are really there and really working, I'm gonna set up a while loop, and I'm just gonna increase the font a little too, where I do a curl on this hello endpoint and repeat that pretty much every second. You can now see the application is responding, and you can also see that we have an alternating ID in front of the response. That means the load balancer kicks in and has its own algorithm for placing the load on the various instances of my application. Now, what we also wanna do, as I said before, is fail the application and see how the platform copes with that and recovers. To do that, I invoke the other endpoint, the fail endpoint. Once I've executed that, you can only see it by looking down here that this instance of the application went into starting mode; my end user doesn't see any of that. Cloud Foundry has detected that this instance has failed, taken it out of the equation, and it's not gonna route any traffic to it until it's back. I'll do that a few more times to show you that I'm telling the truth. You can see now from the ID that there's only one ID the load balancer routes traffic to, because all the other instances are currently not running, and as soon as they are back, like now, it starts iterating over those again. Okay, so let's try to do the same thing over on the Kubernetes side. I will also run a watch command here, and what I'm gonna watch is all the objects that I was talking about before: the deployments, replica sets, pods, and services. Right now I don't have anything except an internal service, and I will now run a deployment using kubectl.
I'm gonna make it a little bigger. This is the run command; I could also use a deployment file in a similar way, so this is just an option, and maybe if I have the time, I'll show both. What I basically do is start this with one replica, and then I have to point to the image that I initially built. This label is optional, but I'm just gonna add it because it might make my life easier later on. And when I hit enter, you can see that a couple of things are happening up here: I now have a pod running, a replica set running, and a deployment running, just the way I expected. Now, if I start to do a similar thing and run a while loop... I forgot something, yeah. Nothing is responding yet, because the piece I forgot is that I have to expose this as a service. So I expose this deployment called simpleweb as a service of type LoadBalancer on port 8080. As soon as I have done that, you will see that my curl gets responses and the application works fine. I'm just gonna move that a little bit. You can see here as well, and I'm also gonna decrease the font a little bit, that we now have a service at the bottom called simpleweb-service, with a load balancer, and the external IP I'm pointing to is localhost on 8080. Additionally, I wanna scale this service too. Now, one of the things that I wanna demonstrate really quickly: initially I said that the replica set is the component in Kubernetes that controls the replicas of the application, so the error I made in the first place was that I tried to scale the replica set. If I try to scale this replica set, it will actually tell me the replica set has scaled, but I don't see anything of that when I look at my objects. And the reason for that is the dependencies between the objects in Kubernetes.
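The imperative `kubectl run` plus `kubectl expose` pair used in the demo could equivalently be written as the deployment file the speaker mentions. A rough sketch, where the names, labels, and image tag are assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simpleweb
  labels:
    app: simpleweb
spec:
  replicas: 1                   # start with a single pod
  selector:
    matchLabels:
      app: simpleweb
  template:
    metadata:
      labels:
        app: simpleweb          # the label used to wire up the service
    spec:
      containers:
      - name: simpleweb
        image: simple-web:0.1   # the image built from the Dockerfile
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: simpleweb-service
spec:
  type: LoadBalancer            # exposes an external IP
  selector:
    app: simpleweb
  ports:
  - port: 8080
    targetPort: 8080
```

This would be applied with `kubectl apply -f simpleweb.yaml`, and scaling to three instances would then be `kubectl scale deployment simpleweb --replicas=3`.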
So I could, for example, create a pod on its own and a replica set on its own, connect them, and then scale that. But in the construction that I have, with the deployment, the deployment basically overrules the replica set, and if the replica set increases its desired count, it will be set back by the deployment. So this is one thing you gotta know, and it means that if you scale the deployment, that will do the trick for you. And you can see now that we have gone up to three instances, all there. On the other side, you can see here that a few requests didn't go well, but as soon as all the instances are up, we can also see, by the alternating IDs, that the behavior is just the same. The load balancer works, it balances between the various pods running in my environment and does everything I need. Now, one thing that I wanna show here as well is failing the application, of course. So I'm gonna do a curl call to this failing endpoint, and sometimes you can see that if a request hits the instance that has just been killed, you get an empty reply. But another thing that you see is that when the instance is coming back but not fully up yet, you get another one of these. What you have to configure here additionally is the so-called readiness check. The readiness check is a monitoring mechanism that Kubernetes uses to say: this application is actually ready, and only if it's ready will I route traffic to it. To configure that, I will copy some code from here, because it's too much to type now and I'm limited on time. Now, one of the things that I really like is the way you can edit a live configuration. kubectl has this edit command, and you can edit all the objects that you're dealing with.
So I'm gonna edit the deployment object, and that gives me a YAML notation, which is the same thing that you can use initially to deploy everything; it is basically a live snapshot of your running environment. What I'm gonna do is put this readiness probe in and then save. I just have to fix the indentation here. As soon as I write the file, the new configuration kicks in, and what you can see here now is that it creates a new replica set. So it basically creates, well, not a new container image, but a new configuration object, and we have five concurrent pods at the moment. So it does a zero-downtime deployment of that new configuration, and you can also see that from the changing IDs right here. This has taken over, and I've added the configuration live without any downtime. What time is it? I need to hurry up a little bit. So the final thing I wanna show on this part: if I fail the application again, you still see every now and then that a single request fails, but the readiness, now you can see, the application is running, but for a moment it was not ready. That was the change compared to what we had before. I can kill another one, and you can see that now: this one will probably go into a CrashLoopBackOff for a second, but the router will not route any traffic to it anymore. So I pretty much have the same experience that I showed with Cloud Foundry before: it checks the right endpoint of the application and interprets the readiness in the right way. What we would additionally like to do is fix the code and say, okay, let's update this to say hello world, now version two, and take the bad call out of the code. I'm gonna skip the process of building another container; I have already prepared that and have an image with version two available.
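The readiness probe pasted into `kubectl edit` might look roughly like this; the path, port, and timing values are assumptions:

```yaml
# goes under spec.template.spec.containers[0] in the Deployment
readinessProbe:
  httpGet:
    path: /hello            # any cheap endpoint that only answers when the app is up
    port: 8080
  initialDelaySeconds: 5    # give the JVM a moment before the first check
  periodSeconds: 2          # then probe every two seconds
```

With this in place, a pod only receives traffic from the service once the probe succeeds, which is exactly the behavior shown in the demo.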
So I will do the same thing, editing the deployment, and in this case I specify the other version of my image. Okay, and basically the same thing happens again: we have this dynamic rollout of the new image. It creates a new replica set, still under the umbrella of the old deployment object, and you can see now that it has switched; we're going from version one to version two. The end user will not see any outage there. And if I do the failing call now, it will just return "fixed". So this has basically patched the application. Now I'm gonna skip the steps to do this update on the Cloud Foundry side. Essentially what you do there is push a new application and then move the routes from one application to the other. So it's a three-step process: push the new application, map the route, and unmap the route from the old one. I'm seeing that I need to watch the time, so just trust me, this is gonna work. Okay, so a quick summary of what I tried to show. Of course I couldn't cover everything, but I'll show you later on that I have a write-up of all those steps, so if you wanna try it yourself, you certainly can. On the Kubernetes side, the things I really liked are the live editing of the configuration and that updates happen without any downtime. Of course it's also really powerful to have this large functional scope: you have many different objects, you can configure a lot, and you can tweak things on various levels in a very granular way. I also listed this as a disadvantage, because you have that really big configuration scope and that means you have to deal with it.
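As a sketch of both update paths, where the app names, image tag, and the example.com domain are placeholders: on Kubernetes a single image change triggers the rolling update, while the manual Cloud Foundry blue-green is the three-step route switch just described:

```shell
# Kubernetes: point the deployment at the patched image; the deployment
# rolls a new replica set in and the old one out, with no downtime
kubectl set image deployment/simpleweb simpleweb=simple-web:0.2

# Cloud Foundry: manual blue-green via the router
cf push simple-web-v2 -p target/simple-web-v2.jar              # 1. push the fixed app
cf map-route simple-web-v2 example.com --hostname simple-web   # 2. route traffic to it
cf unmap-route simple-web example.com --hostname simple-web    # 3. detach the old app
```

These commands need a live Kubernetes cluster and Cloud Foundry installation, so they are illustrative rather than something to run as-is.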
Additionally, compared to Cloud Foundry, you have a lot higher skill requirements, because you're not only dealing with your applications: you need to know something about Docker, or another container runtime which is supported, and either way you have to build your container image and you need the skills to operate Kubernetes. The handling of containers, as I just said, definitely has overhead. Right now, in this example, it was pretty straightforward, but in reality it means you need a process to build your containers, a process to bring your containers into a container registry, and then you have to handle your images, patch your images, and so on. Now, on the Cloud Foundry side, the big advantage is the simplicity. Also, and I put that in quotes, it acts "container-less": the beauty of it, as I see it, is that it has the power of containers, it uses containers internally, but it shields the end user from the disadvantages of all the container handling. It is very fast to deploy; I think if you have an application that depends on a buildpack which exists, and that connects to a backend that you have in your services, it's difficult to find a way to deploy an application to a platform quicker. The downsides here are also the buildpacks. They are very powerful, but if you happen to come to a code base for which there is no supporting buildpack, that advantage quickly goes away. If you look at it from a Kubernetes perspective, it's definitely a limited configuration scope. Blue-green deployment is something that works in Cloud Foundry, but not in the same automated way as it does in Kubernetes. I mean, I don't know if you've seen the keynote this morning with the CF better push. So, okay, this point is basically not valid anymore, but I didn't know about that until then.
Stateful workloads are something that I haven't brought up here as an example, but that's the thing that was talked about a lot earlier, and it's definitely a reason why you would look at Kubernetes. So now, to wrap things up: if you look at Kubernetes from a Cloud Foundry perspective, I would summarize it this way: you can do and configure more, but in turn you also have to do and to know more to get things right. If you look at it the other way, coming from Kubernetes and looking at Cloud Foundry, you might wanna say it's very easy to handle, but does it really give me all I need? And this is the assessment that you have to do if you evaluate the two platforms. In the end, my wish, which would make my life as a consultant easier, is something with the functional scope of Kubernetes and the simplicity of Cloud Foundry. I've just been over at Jules Friedman's talk about one attempt to integrate them; I know there are many. I think the general tendency is going in the right direction, and I'm definitely looking forward to seeing what that brings. Now, basically the last slide. If you wanna reach out to me, if you have any more questions, feel free to ping me on Twitter. I don't tweet too much, but I respond to messages. One thing I wanted to show you: on my GitHub account, I have the repository pinned right here. I called it CF versus K8s. In there, I have detailed instructions for basically all the steps that I did today. So if we look, for example, into the Kubernetes part, I describe the steps to set up the watch, run the image, and I have a lot more alternatives in there. So if you wanna repeat that, if things went a little too fast today, which I can understand, then you can look into it and try it yourself. I have an explanation for building a service, for dockerizing it, for playing with Kubernetes and Cloud Foundry.
However, it does not explain how to get to the platform; that is the part that you have to do yourself. And again, for the meetup: whenever you come to lovely Stuttgart, feel free to ping me and I will tell you if something's going on. If you have an interesting story to share, I'll be happy to set up a meetup with you and have you as a speaker. So with that, I would say thank you very much. I'm gonna release you all for lunch now. In case you still have any questions, come and see me. Thank you.