Okay, good morning, and welcome to my talk today. Or good afternoon, or good evening — I don't really know from where, or at what time, you are dialing in right now. I am recording the talk at 11 o'clock in the morning, so I think that's about the same time as it will be shown at the conference. If you have any questions, there is a chat where you can post them, and I can answer them either on the fly or at the end of the session — so please make use of that. Now I'm going to start sharing my screen. Okay, I hope you can all see the presentation now. This is the Open Source Summit, and that's the reason why I want to talk about open source platforms — cloud platforms, to be precise. We're going to look into what Kubernetes, Cloud Foundry, and Knative can do for you, and what else is out there that you can use. I probably have the most detailed view on those three because I have the most experience with them. There are definitely more, but I won't have time to cover them all, because there are only 40 to 45 minutes. Before I dive into that, I will say a few words on who I am, what I do, and how I came to give this talk. Then, switching to the focus of this topic: where did all this come from? We'll look at the history, how things evolved, and what influenced what. Then we'll also look at what a platform actually is — what I would call a platform, what benefits people can expect to get out of it, and what they have to invest in turn to deal with it. On the technology side, Kubernetes being the most popular and prominent cloud platform nowadays, we will relate it to Cloud Foundry and Knative. If time allows, we'll go a little bit into a demo and then wrap things up.
Now, as promised, a few words about myself. This photo is a bit of an older version of me, when my hair was a little shorter — not younger. In my daily job, I advise and help clients on their way towards the cloud; I will say a few more things about that soon. On the open source side of things, I am an organizer of a user group — a meetup, so to say — in the area of Stuttgart. Now, of course, Stuttgart is quite a few kilometers away from Japan, but since these days most things are done remotely anyway, you are of course very much invited to join. If the time zone doesn't really match, the sessions are recorded and you can find them on YouTube. If you have any questions about that, about myself, or about anything that comes to mind and you don't want to write it into the chat right now, you can always reach out to me on Twitter — that's my handle below, so feel free to use that as well. Okay, the reason I came to give this talk is basically a result of my day job, where I help — or at least try to help — people move from the old world to the new world, so to say. I also help people starting with cloud who don't have an old world, but a lot of people do. Deciding on which environment to run your workloads is probably one of the most important decisions there, along with what kind of skills the people will need to work with it. So it's a choice of technologies, providers, education, and of course making all those things secure. This is where I come in, in various forms: on a strategic, high-level basis, or educating people with classes, or really hands-on implementation and/or migration of applications towards the cloud. Now, looking at the history: this is a bit of a timeline of things relating to what we today call cloud platforms. An interesting fact is that it already started very early — in 1979, with chroot, the first thing on that scale.
Now, with chroot alone, I wouldn't argue a cloud platform was possible, but as a fundamental concept for those platforms, I would argue that container technology is the most important thing for running cloud-native workloads. Docker is the most prominent and popular technology in that space, and I think most people have heard of it by now. What's interesting is that Docker only came into existence in 2013, whereas the kernel technologies to build such a container were already finished in 2007. So it really took quite a while until the technology got adopted — and even more interestingly, there were technologies before Docker that did pretty much the same thing. For example, LXC, Linux Containers, came out in 2008. If I give this talk live and ask people whether they know Docker or whether they know Linux Containers, the answer is pretty clear: only a few people have heard of Linux Containers, and I don't know many people that really use them, or still use them, in production. Maybe it just wasn't the right time for the technology back then. In 2011, two technologies came out — Cloud Foundry and OpenShift — with the focus of being something like a platform as a service: to run, orchestrate, scale, and fail over your cloud-native applications. And this is pretty interesting, because they did a lot of things that Kubernetes does today, without using Kubernetes and without using Docker. That has changed over time, because the adoption of Kubernetes is just so big. So that's something we're going to look at: how did these technologies start, and how have they transformed under the influence of Kubernetes? We'll have a look at Istio and Knative, and then the latest Cloud Foundry developments. I'm going to talk about all of them, but it's basically the Cloud Foundry for Kubernetes part which we're going to look into in the live demo, if this works out.
Now, as you can see, there was quite a bit of history, and that leaves people today with a decision: what could be the right one for me, and how can I approach that? I could also do this live, but the screenshot works just as well: if you put the three technologies into a Google Trends search, the result is very clear. Kubernetes dominates this space by far — and I don't think this only reflects how often people google the search term; it also fairly reflects the adoption. Kubernetes came to the game fairly late, but it really took the market by storm. And even if you take Kubernetes out of the chart, you can see that Knative came even later — it was built on Kubernetes in 2018 — while Cloud Foundry is doing a kind of steady line there. The question to me was: why? Why is the one so popular and the others not? What does each do better or worse? That's what I would like to reflect on here a little bit. Now, I personally started this journey in the days of Cloud Foundry, before Kubernetes, looking at things from a PaaS perspective — platform as a service. There, you don't really worry too much about containers; the focus is more on an application level. You push applications, you bind them to services, you use routes for the connection to the end user. All those concepts are fairly easy to grasp and understand. They don't have all that much configuration scope, but they are very much tailored and focused on a certain use case. Now, when I started with Kubernetes, there were a lot more things, as you can see here — and to be fair, this is not a one-to-one comparison.
I don't want to say you need all the things here to do the same thing you did with two artifacts in Cloud Foundry. But I would say you should understand most of them in order to figure out the right way to do it, because Kubernetes has many more options and much more configuration scope, which can be very powerful, but which also makes it sometimes difficult to learn and understand. Especially if you look at all the parts here, you wouldn't even find one called 'application'. The reason is that Kubernetes doesn't operate on that level: Kubernetes operates on the level of containers. You see a container — a Docker image — here; this is the abstraction level that the platform speaks. So this is already one of the fundamental differences. Nevertheless, this is the way things look. In this context, I found a pretty funny tweet from Arun Gupta, who is, I think, a Docker Captain and a Java Champion and very active in the community space as well — so I think he understands really well what he's talking about. He said that everybody doing Kubernetes seems to be unhappy with it, and everybody who's not doing it yet seems to be craving for it. Now, I don't fully agree that people doing Kubernetes are unhappy, because I think it can make you very happy. But there's a certain truth in it: there is a big attraction, or hype, towards the technology, and as soon as people start working with it, a lot of them realize it's not as easy as they thought in the first place. And that brings in a high degree of complaints. Starting from here, we will see where we're going to go. Now, coming to the term 'platform': what does that mean? I'll try to illustrate that with a bit of a diagram, starting from the perspective of a developer.
And we'll also figure out the additional roles a platform involves. So the developer has the intention to write and build code and run it, so that users can benefit from it. The first step would probably be to put the code into a source code repo, do a build step, and get a runnable artifact out of that. Those things are normally done with CI/CD pipelines and a high degree of automation, if people do it right. On the other side, a lot of code will not work on its own: it will depend on backing systems like databases, messaging, or other third-party applications. So there's the second role coming in — the people who administer those pieces. Now, for the application to talk to the database, it most likely needs some configuration or credentials to actually get there. In the end, those things will either be baked into a container or at least be provided to a container to consume. And then it needs to run somewhere. All of that together is what I would see as a platform, and there's the third role, the so-called administrator, to run these things in an automated fashion. We can see a lot of technologies down there that provide things like that. I would also like to mention that there's a bit of a fourth role, because depending on what abstraction level you enter the platform at, it's not always clear who owns which responsibilities. It's pretty obvious that the developer owns the coding part, but who owns the build part — who builds the framework that builds the application and builds the container? The next question would be who owns the part of building the container, and that includes who provides the base image for that container, and who is supposed to patch that image if things go wrong. So this role doesn't always come into existence right away.
It doesn't get much visibility at first, but if something goes wrong, it's suddenly there, and it needs clarification: who is now in the position to patch a corrupted base image — in case of, for example, Heartbleed or a similar vulnerability that only appeared at a point in time when the code was already running in production? And so a platform — to wrap things up; I'm basically missing a slide here — is something that will help all those roles. Most likely you will have all four roles, even if they sit in one or two persons. A platform's responsibility is to make those things easy for the people that consume it: the less people have to deal with recurring tasks, the more they are able to focus on the things that are really important and make a difference for the consumers. Now, a couple of ways to look at that — I'm just going to move my image around here a little. Most of you will be familiar with the day-0 to day-2 operations model, where day 1 and day 2 are the important ones in the context of a platform: day 1 is where code is built, packaged, containerized, and deployed to the platform, and day 2 is the platform taking over, with all the power it has to scale, recover from failure, patch dynamically, provide observability, and so on. The difference is really made here, to say: these are the capabilities that I need, so this is the right platform for me. Another way to look at it is which abstraction layer my platform works with. Does it only take containers?
Does it also accept source code in the form of applications? Or does it go even further, into the world of what's called serverless or function as a service, where event-driven architectures work and you have only very short-lived applications in the pods that are executed very often? So there are a lot of distinctions to be made. And just to take this away up front: there is no best platform to solve all the problems. I told you I'm a consultant, so the answer will always be 'it depends' — and you can see quite a few factors here of what it really depends on. Now, as I said before, a platform will help make it easy for the roles involved to do the right thing. Starting from here, we're going to look at the various platform technologies. Kubernetes — the one that everybody seems to want to have, or at least seems to google for — is actually coming from Google. Google has an internal container platform that it uses for the majority of its workloads, which uses very similar container and orchestration concepts. That platform was never open sourced in that form, but from a technology perspective Kubernetes goes very much in the same direction, and Google decided to put it into an open source project and give it out to the world — to come up with something which is not opinionated, but very open and extensible. It became a very fertile ground for a lot of new open source projects and possibilities. It's the major project of the CNCF and, as I said before, it has given ground to many, many new developments which influence the way we run our workloads today. So let's do the same thing as before and start from a development perspective: I have an app, and I want consumers to get to this app on a runtime. The first thing that needs to be done doesn't really touch Kubernetes, because you have to build a container.
The Dockerfile is probably the most prominent way to do that; however, there are alternatives. There is, for example, a tool called Jib from Google if you have a Java application, and there is also the buildpacks technology, which I'll come to speak of later. But let's just assume you need some mechanism to take a base container image and a runtime for whatever your application is about, build your image, put the application on top, and end up with a ready container image containing your application. You need to push this to a container repo, or registry. Those are the initial, required steps you need to do before you can actually do Kubernetes. Now, in Kubernetes — this is always a bit difficult to visualize — I decided to use the command kubectl create to create a so-called Deployment. What that means: it will first of all reference the image from the registry that you pushed before; without that, it can't do anything — it won't be able to see an app or anything like that. And it creates a construct which is called a Pod. A Pod means one to many containers, in technical terms. In biological terms, it is a group of whales — and you have probably seen the Docker whale icon before. So the name already implies that multiple containers can run in such a Pod; I symbolized this with a second container in here. Over that, the umbrella object is a so-called ReplicaSet, and as the name says, this is about the replicas of such a Pod. Important to know here: even though you can have multiple containers within a Pod, Kubernetes can only handle Pods. The Pod is the smallest unit of deployment that Kubernetes can address. So you would not be able to say, 'I want this container inside the Pod to run with three instances and this one with only one' — the containers in a Pod either start and stop together. On top of this ReplicaSet, there is a so-called Deployment object, which maintains the versions of those replicas.
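The flow just described — build a container image, push it to a registry, then let Kubernetes create the Deployment — could look roughly like this. The image name, registry, and app name are hypothetical, just to sketch the idea:

```shell
# Build a container image from a Dockerfile and push it to a registry
# (registry.example.com/demo/hello-app is a made-up name)
docker build -t registry.example.com/demo/hello-app:1.0 .
docker push registry.example.com/demo/hello-app:1.0

# Only now does Kubernetes come into play: create a Deployment,
# which in turn creates a ReplicaSet and the Pods running the containers
kubectl create deployment hello-app --image=registry.example.com/demo/hello-app:1.0
```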
So whenever you have a new container, because you have patched your image, you can roll that out. And you will be able to expose your endpoint to an end user. Now, looking at that, it already contains a lot of things that you can do: you can scale on the basis of your applications in the containers, you can dynamically switch over from one version to the other, you have possibilities to expose that workload to the end user, and — what you can't see here yet — you are able to recover if something fails. So the value, you could say, is: it's a runtime for container workloads. If things crash, Kubernetes will bring them back. It can manually and automatically scale the Pods. It can do rolling updates for new versions. You can easily extend it and put it up wherever you want. As it comes from a battle-tested implementation of internal Google workloads, it has a high degree of robustness and stability. And you can get it pretty much everywhere: either on your local machine, or at the major cloud providers — most of them have a Kubernetes service — so it's a really easy way to get started. Looking at the problems, so to say: Kubernetes in itself is pretty agnostic to what is in the container. You can configure it to monitor and observe that, but in the default state, Kubernetes will just treat each container the same way. Another problem that I see very often is getting to understand and apply all of this correctly: it takes a steep learning curve. The extensibility and configurability is of course an advantage, but it can also be a disadvantage, in that it's easy to lose things out of sight because it can get somewhat confusing. YAML is always a point of a lot of discussions — either you like it or you don't, but you will most likely have to deal with it when working with Kubernetes.
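Written out declaratively, the capabilities above — a desired replica count, rolling updates when the image changes, and exposure to consumers — might be sketched in YAML like this. All names, images, and ports are illustrative:

```yaml
# Hypothetical Deployment: three replicas of a container image,
# replaced via a rolling update whenever the image reference changes
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
      - name: hello-app
        image: registry.example.com/demo/hello-app:1.0
---
# A Service exposing the Pods behind one stable endpoint
apiVersion: v1
kind: Service
metadata:
  name: hello-app
spec:
  selector:
    app: hello-app
  ports:
  - port: 80
    targetPort: 8080
```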
As for which workload is the right one for it: it's definitely going to be container images, because that's what Kubernetes really deals with. Now, that means it divides the responsibilities into a couple of segments, and only after you register the image will Kubernetes be able to take over. Day 0 and day 1 are basically not really a Kubernetes thing; Kubernetes really unfolds its power on the day-2 side. You could of course argue that deployment is also a step in there — that is true — but it doesn't interfere with the process of building the container. Now, let's look at Cloud Foundry. As I said before, Cloud Foundry was already around in 2011, with a strong focus on being very simple for development. If I could only explain Cloud Foundry in one sentence, it would really be the haiku below: 'Here is my source code, run it on the cloud for me, I do not care how.' That already goes a long way and shows the perspective: I don't want to worry about anything below my source code, and I want developers to be able to quickly build, test, deploy, and scale their applications. So the biggest contrast to Kubernetes is the different flying height, and it's also very opinionated: it's by far not as configurable and extensible. It was really designed in the spirit of 'take it, it works that way, and it should be fine for you.' Now, looking at the same picture in Cloud Foundry terms: you would address the application code directly. You would cf push an application, and the application would go into the platform. So basically there's an invisible line here — I hope you can see my mouse — between what the user interacts with and what he doesn't interact with. The user in this case only submits the code; everything that touches containers is handled within the platform.
Cloud Foundry will detect what the right base image is and what the right runtime is — which is called a buildpack here — to run that application, create the image for you, put it into a registry, and run it. So it eliminates those steps and also takes them out of your configuration scope. The other important command is the cf bind-service call, which will get the configuration of backing services and inject it into the application container — again, to make things simpler in terms of handling the backing services from a developer perspective. Of course, there needs to be an operator that provides those services, but there is also an API to work with them. Important to note here is the distinction between applications and services. This ties back to the twelve-factor app manifesto, which declares — or recommends, I don't know how to say it — a separation of stateless applications and stateful services. That is being encouraged here with the names as well: 'services' are all the stateful components like databases and messaging — everything stateful — and the applications should be stateless code. Otherwise, the automation within the platform would not know about the state and would treat the code in the wrong way. This applies in the same way to Kubernetes, by the way: if you have in-memory state and you scale your application, you might see very unwanted effects. In the end, the platform gives back a route to the end user — similar to a Service and/or an Ingress in Kubernetes, which is done manually there and automatically here. So the biggest difference you can see is that the responsibility of the platform extends further to the left, because the platform is really involved in building, packaging, and containerizing the application, whereas the people on the left can focus better on providing the source code of the application.
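As a sketch of what this looks like from the developer's side — with hypothetical app, service, and plan names — the whole flow reduces to a handful of CLI calls:

```shell
# Push source code; Cloud Foundry detects the buildpack, builds the
# image, stores it, runs it, and maps a route — no Dockerfile involved
cf push hello-app -p ./hello-app

# Create an operator-provided backing service and bind it to the app;
# the credentials are injected into the application container
cf create-service mysql small-plan my-db
cf bind-service hello-app my-db
cf restage hello-app   # restage so the app picks up the new binding
```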
So, to make a value statement here: from a workload-handling perspective, it does very similar things to Kubernetes — and with the new version it does exactly what Kubernetes does, because it's running on a Kubernetes infrastructure now. The focus really is simplicity and application awareness: you work on a different level and can think about things from that perspective. On the problem side, I would say adoption: I don't think that many people know about it — the same will be true for Knative later on. It also has the limitation of buildpack coverage. In Kubernetes you can run pretty much everything that you can run in a container; in Cloud Foundry you can only build those things for which a buildpack exists. That is definitely the case for the most common programming languages, but if you hand this responsibility over to the platform, it is of course something you expect to work — and it could be an area where you hit limitations. As for workloads: you can run apps and containers, but the focus is definitely on doing things on an application level. If you use Cloud Foundry and only push container images, you're basically missing out on the core part of the value. All right. Finally, there is Knative, which is, as the name says, native to Kubernetes. It was built when Kubernetes already existed, and it took significant influence from previous platform-as-a-service concepts like those of OpenShift and Cloud Foundry. It also tried to bring in the functionality of running functions — being a platform for function workloads that start very quickly and also terminate very quickly, to get a better utilization out of your environment without having idling components. Knative needs Kubernetes and Istio — Istio being a service mesh — so that it actually has insights into the traffic within the cluster.
In order to react to that traffic, it needs to know about it: if there is a lot of workload coming in, it needs to scale up; if there is no workload coming in at all, it needs to scale down — even down to zero, so there are no instances left. To sum it up: the focus is on simplifying the Kubernetes experience on one side and providing serverless capabilities on the other. Now, to be fair, Knative provides more than what I'm showing here today. There's the Serving component, which can be seen as the functional equivalent, so to say, of what we have seen on the Cloud Foundry and Kubernetes side. There's also an additional Eventing component, which, as the name says, takes care of event-driven application architectures and integrates very well with Serving to support eventing mechanisms. There's also Tekton as a build-and-deploy component; it was initially a part of Knative but has since been spun out as a separate project. It still works with Knative, of course, but it's not an official, mandatory Knative component anymore. Now, to get to your running Knative application — sorry for that — you basically repeat all the same steps that you have done in Kubernetes. Even though Knative has the ability to deal with functions, and deal with them properly, those functions still need to be provided in the form of a container. Knative will not do the packaging of functions into container images, but it can deal with images containing functions in a very good way. So, again, the responsibility of building that image, and that part of the action, remains with you — however, this is something that can, for example, be addressed with the Tekton component. Once it is running, it has a very simple API: the top-level object is the so-called Knative Service.
Now, this makes things a little complicated, because we have Services in Kubernetes and we have services in Cloud Foundry, and all of them mean slightly different things. Here in Knative, the Service is the top-level component of running such a workload. You provide an image, the Service object is instantiated, and it will in turn create a Configuration and a Route object. The Configuration object can have multiple Revisions. As I said before, in Kubernetes you can have multiple ReplicaSets deployed, and one will replace the other with a graceful switchover, so to say — rolling out the new one without causing an outage or downtime for the end user and the consumers. Knative takes this a step further: with Revisions, you are able to run multiple of those revisions — basically multiple different versions of your application — and the Route decides what percentage of the traffic is routed to each Revision. There is the concept of canary deployments, where you might want to roll out a new version only for a group of testers, a certain region, or a certain percentage of users — and here you can do exactly that. If you realize it's not working the way it should, you can simply roll it back, or shift the percentage back to the version which is still in there and working well. To put this into perspective: with Route, Image, Revision, and Service, this is a fairly minimal set of concepts you need to deal with in order to use Knative. So also from a learning-curve, developer-simplicity perspective, there is an improvement here. Of course, you need to do the build steps in advance, but on day 2, Knative brings significant advantages: it runs function-as-a-service workloads, and it can scale to zero.
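A Knative Service with two revisions and weighted canary routing could look roughly like this — the revision names, image, and percentages are made up for illustration, and the older revision is assumed to already exist:

```yaml
# Hypothetical Knative Service: the template defines the newest revision,
# the traffic block splits requests between old and new revisions
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello-app
spec:
  template:
    metadata:
      name: hello-app-v2        # name of the new revision
    spec:
      containers:
      - image: registry.example.com/demo/hello-app:2.0
  traffic:
  - revisionName: hello-app-v1
    percent: 90                 # most users stay on the proven version
  - revisionName: hello-app-v2
    percent: 10                 # canary: 10% of requests hit the new one
```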
It can of course also auto-scale to a much higher amount if the workload suddenly increases. You can auto-scale in Kubernetes as well, but out of the box the metrics to trigger that are only CPU and memory — basically the metrics that can be measured at the container level. Here, the metrics also include the traffic on the wire: if there are requests coming through, it will scale up. And there's the weighted routing, to say, 'I want this and that percentage of the workload to go here.' On the problem side, again: adoption and popularity. 'Late integration' — right, this here must be a copy-and-paste error on my slide. You could say Knative should have been there earlier, but that wasn't really possible, because it required Istio. I'm just going to correct that — sorry for this, we can jump back in here — there, I took this error out. The important thing from a workload perspective: it handles containers, applications, and functions. It provides mechanisms to deal with them, and they should all come in a container scope, but it has the ability to look into them quite a bit better. Why am I summarizing all this? To say: Kubernetes provides the container platform's technical capabilities — your workloads can crash and Kubernetes will bring them back; you can scale them up in case you need more instances, more power, or load balancing. Everything you would expect from such a platform is something Kubernetes provides in a very robust way. And the cool thing with Cloud Foundry and Knative is that they're not replacing it. So it's not an either/or — it's not 'if I do Cloud Foundry, I cannot do Kubernetes' — because they are running on the same platform. They can simply be added and installed on top of a running Kubernetes environment.
This gives you the possibility to say: well, I can now push code, and I can push containers, and in the end they are all going to be running in Pods on the same platform. So Cloud Foundry can be seen as just a layer on top that provides an easier abstraction for developers — a better developer experience and, to me, the most important factor: it keeps developers completely free of dealing with containers. Knative adds on in a different way: it extends the possibilities — scale to zero, revisions, automatic scaling to high instance counts, and percentage-based routing. Now, with that, I will probably stop right here, because I see I'm running out of time. What I would like to leave you with: I would like to encourage people to try all of this, because, as I said before, Kubernetes has many, many different offerings. There's Minikube, there is kind, there is Docker Desktop — all on a local environment, so you can run them on your machine and just start using them. There are a lot of options for public services: Azure, Amazon, Google, IBM, and many, many more. Most of them have a free tier or so, where you can start using them and just play around. For the technologies I've shown — Knative and the various Cloud Foundry options — there are a lot of tutorials out there which make it really easy to just start and try them out. It doesn't come with a very high cost or a very high effort. If you are trying out Kubernetes anyway, why not put one of the other two on top and see if it brings any benefit for you? Now, unfortunately, I realize it's already been 45 minutes and I'm probably over time here. I did prepare a demo, and it's kind of unfortunate that I couldn't get to it, but I thought the other pieces were definitely important as well.
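To pick up the encouragement to try things out locally: the first step can be as small as this (the cluster name is arbitrary):

```shell
# Spin up a local Kubernetes cluster — kind runs the nodes as Docker containers
kind create cluster --name playground
# ...or alternatively:
# minikube start

# Verify the cluster is up and reachable
kubectl cluster-info
kubectl get nodes
```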
If you are interested in that, I have given this talk before in a longer version, also with demos. You will be able to find it on this playlist, or reach out to me. Once this video gets published on YouTube, I will link the others. But for today, it probably wouldn't make sense to step into a demo for one or two minutes, so apologies for that. I hope you still enjoyed the talk. As I said before, if you are more interested in these topics, feel free to reach out to me. And with that: thank you, and I'll open up for questions.