We wanted to introduce ourselves, but thank you for taking over that part. It's great, actually, we save more time. Yeah, so we both work on a project called Eirini, and this is the thing that we want to talk about today. The topic is Eirini versus Diego, or Eirini or Diego: how to choose the best-fitting container scheduler. So, I think I need to turn it on first before it works. There we go. Yes, we love colors. We'll talk about why container orchestration at all. We'll talk about what options you have today with Cloud Foundry and what options you have with Eirini. We'll do a comparison between Cloud Foundry and Kubernetes and between Kubernetes and Diego, and finally we will go to the use cases and tell you when to use Diego and when to use Eirini. But before we start to talk about all that stuff, let's talk about lunchboxes. So, this is a lunchbox, and it's not just any lunchbox. It's my son's lunchbox, and I think lunchboxes are really cool because I can take my son's favorite food and put it in there. I can close it and I have a small, portable, lightweight thing that isolates my food. I can prepare the food in the evening and put it in the fridge, where it's isolated from all the other food, right? That's really cool. And in the morning when he goes to school, I can take it from the fridge and put it in the backpack, and it's still isolated. Awesome. And from there, my son can take it to school. He can take it to the playground. He can take it to the beach. Basically, he can take it to any place in the world and have lunch. That's the cool thing about lunchboxes. But why do I talk about lunchboxes? Well, I think their characteristics can easily be applied to the containers that we run in the cloud, right? They're lightweight. They provide isolation. They're portable. And just as you package your lunch into a lunchbox, you can package your app into a container with the runtime and everything that you need to actually run the app, right? 
So, when you have an image, you can run that container on your personal machine. You can run it on Kubernetes. You can run it on Docker Swarm or any other container orchestrator on the market. And this is where you have to make a decision. Which container orchestrator do you want to use to run your container? And it's not as easy as with a lunchbox, where you just decide, I want to eat it in the park. I mean, here, this is computers. And computers are hard. And you have to make a decision. And it is a hard decision, because container orchestrators differ in their architecture, in their operational model, in their tooling. They have different benefits and different downsides, depending on which container orchestrator you choose, right? But in the end, they all do the same thing, right? They just do container scheduling. And as a developer, especially on Cloud Foundry, I don't care about the scheduling. I just want that UX. I just want to push my app into the cloud. I don't care what Cloud Foundry does and where it runs. But who cares here, if the developer doesn't? It's obviously the operator. The operator cares about which container scheduler to use. And of course, he needs to make decisions. Do I have the right skill set for the scheduler that I need? How much complexity can I handle, or do I need a lightweight container scheduler or orchestrator? Such decisions he needs to make when he chooses a container orchestrator. But today, with Cloud Foundry, you have only Diego, right? So wouldn't it be nice if Cloud Foundry had a plug-in point where you could just say, I will use another container orchestrator than Diego, something like an API where you can say, yes, I want Diego, or no, I want Kubernetes, or potentially any other container orchestrator. And we just call it OPI, the Orchestrator Provider Interface, just like BOSH has a CPI, the Cloud Provider Interface. And yes, this is exactly what Eirini is. 
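To make the OPI idea just described a bit more concrete, here is a minimal sketch in Python of what such a pluggable interface could look like. This is purely illustrative: the names (`Orchestrator`, `desire_app`, `InMemoryBackend`) are invented for this sketch and are not Eirini's actual API. The point is only the shape of the idea: Cloud Foundry codes against an interface, and the backend behind it is swappable.

```python
from abc import ABC, abstractmethod


class Orchestrator(ABC):
    """Hypothetical OPI-style interface: the platform talks to this
    abstraction instead of directly to Diego or Kubernetes."""

    @abstractmethod
    def desire_app(self, app_name: str, image: str, instances: int) -> None:
        """Ask the backend to run `instances` copies of the app image."""

    @abstractmethod
    def running_instances(self, app_name: str) -> int:
        """Report how many instances the backend currently runs."""


class InMemoryBackend(Orchestrator):
    """Toy backend standing in for a real Diego or Kubernetes adapter."""

    def __init__(self):
        self.apps = {}

    def desire_app(self, app_name, image, instances):
        self.apps[app_name] = {"image": image, "instances": instances}

    def running_instances(self, app_name):
        return self.apps.get(app_name, {}).get("instances", 0)


backend = InMemoryBackend()  # any other Orchestrator could be swapped in here
backend.desire_app("my-app", "registry/my-app:v1", instances=3)
print(backend.running_instances("my-app"))  # → 3
```

A real adapter would translate `desire_app` into BBS calls for Diego or workload objects for Kubernetes, but the calling code would stay the same.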
Eirini makes the container orchestrator for Cloud Foundry swappable, and today it comes with a Kubernetes back-end. At this point, I don't want to go into much more detail about Eirini. There were other talks, so we don't need to say more about this, but I think for operators it's important to see what the architecture of Cloud Foundry itself looks like if you use Eirini. So this is what Cloud Foundry looks like today. And we're interested in these components, which are the Diego components: nsync, the auctioneer, the cells, Garden, and the BBS. So what does this look like with Eirini? It's like this. You have Eirini, which is the bridge between Cloud Foundry and Kubernetes. It keeps everything in sync. And you have the Bits-Service. It's the master of the blobs. It has the droplets, and it gives you a container image representation of your application. And Eirini makes use of that. It tells Kubernetes where to get the image from. And Kubernetes replaces all the Diego bits. Great. So why Kubernetes? Kubernetes has its origins in the Google Borg and Omega schedulers. They're not open-source schedulers, but there are many papers out there, and you can read that they have been around for many, many years, I think almost 20 years. And all the experience from those two schedulers flowed into Kubernetes. I think that's probably a good reason why Kubernetes is such a good orchestrator. It has one of the biggest open-source communities. It works on any cloud. Even Amazon has it, right? So you can basically run it everywhere. And it's kind of becoming a standard. Or it probably already is the standard. All right. So what is Kubernetes exactly? Is it just an orchestrator? No, there's more to it. Is it a platform? Kind of. It is a platform to build platforms. In fact, Kelsey Hightower just recently tweeted that Kubernetes is for people building platforms. If you're a developer building your own platform like Cloud Foundry, right? 
Then Kubernetes is for you. And that's exactly what we do with projects like CF Containerization and Eirini. And now you have a rough idea where to put Kubernetes and where to put Cloud Foundry. It's not next to each other; you should put Cloud Foundry on top of Kubernetes. And to make this more clear, Georgi will now do the comparisons. All right. So despite that call by Kelsey, there have still been quite a lot of comparisons between Cloud Foundry and Kubernetes in the last couple of years. And even though they have many things in common, it still feels like the comparison is not very fair. Because on one hand you have a fully-fledged PaaS in the case of Cloud Foundry, and on the other hand you have Kubernetes, which is somewhere in between the IaaS and PaaS layers. In fact, if you take Diego out of Cloud Foundry, you'll find out that Kubernetes shares more things with Diego than with the rest of Cloud Foundry. And that makes sense, right? Because now you're comparing two things that have a common end goal: to manage the life cycles of your containers. Now, the CF versus Kubernetes comparisons are still not without merit, mainly because Kubernetes has so many features, and some of these features are quite common to PaaS offerings like Cloud Foundry. For example, they both provide different methods for load balancing, they both have namespacing capabilities, and they have different authentication strategies. But fundamentally, for the two of them, the platform abstraction is placed at a different level. For Cloud Foundry, the platform abstraction is placed at the application level, which means that Cloud Foundry only cares about building and deploying fully configured applications. With Kubernetes, the platform abstraction is placed at the container level, which is a much lower level than the application one. 
And another big difference, and this is probably the difference that some of you have found out about if you have ever tried using Cloud Foundry or Kubernetes, is the user experience. Cloud Foundry makes it extremely easy for you to just focus on your code and push it to the cloud without having to care about how it's happening on the back end. With Kubernetes, this is not the case. Even if you want to push the simplest of apps into a Kubernetes cluster, you still have to be aware of the inner workings of the platform, and this is usually not something that you want to do if you just want to focus on your code. But anyway, this is not actually the comparison we're interested in right now. We're much more interested in the other intersection, and that's between Kubernetes and Cloud Foundry's own container scheduler, which is Diego. And they actually have quite a lot of things in common, so we'll start with the similarities. The first one is the architecture. They both share quite a similar architecture. They implement the so-called monolithic scheduling model, which basically means that the cluster state is stored in a centralized store, and you have a control plane of decoupled components which act on it. One of these components is known as the scheduler, and the scheduler responds to cluster changes and basically tries to correct the discrepancies between the actual state of the objects in the cluster and the desired one. And it accesses the cluster state through a central API, and the reason for having a central API here is consistency, and because it simplifies the whole architecture. The second big similarity is that they both try to solve the bin-packing problem, which means that both Diego and Kubernetes are smart enough to know where to place your container in order to utilize the system resources as best as possible, without sacrificing availability in the process, of course. 
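The two similarities just described, a reconciling control loop over a central store plus bin packing, can be sketched together in a few lines of Python. This is a toy under loudly stated assumptions, not Diego's or Kubernetes' actual algorithm: it compares desired against actual instance counts and places the missing instances with naive first-fit packing; all names (`reconcile`, `cell-a`, and so on) are made up for illustration.

```python
def reconcile(desired, actual, nodes):
    """One pass of a toy monolithic scheduler's control loop.

    `desired` and `actual` map app name -> instance count, as if read
    from a central store; `nodes` maps node name -> free container slots.
    Missing instances are placed with naive first-fit bin packing.
    Scale-down and failure handling are deliberately omitted.
    Assumes enough total capacity exists for every missing instance.
    """
    placements = []
    for app, want in desired.items():
        for _ in range(want - actual.get(app, 0)):
            # first-fit: take the first node that still has a free slot
            node = next(n for n, free in nodes.items() if free >= 1)
            nodes[node] -= 1
            placements.append((app, node))
    return placements


# Desired: 3 "web" and 1 "worker"; only 1 "web" is actually running.
print(reconcile({"web": 3, "worker": 1}, {"web": 1},
                {"cell-a": 2, "cell-b": 2}))
# → [('web', 'cell-a'), ('web', 'cell-a'), ('worker', 'cell-b')]
```

Real schedulers score and filter nodes on many dimensions (memory, disk, spreading for availability), but the loop shape, observe state, compute the diff, place the difference, is the same idea in both Diego and Kubernetes.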
And the third similarity is that they have self-healing capabilities, which means that if your app crashes for some reason, it will get automatically restarted for you, and they can both scale your app up or down in order to balance incoming load or just to suit demand. So as you can see, they share the core functionalities that you would expect from a container scheduler, but when you go beyond the core functionalities, you start seeing some differences, and this is actually the interesting part. The first big difference is that Diego is tailored to suit the exact Cloud Foundry needs, which means that Diego doesn't have anything more than Cloud Foundry requires it to. It also means that, because of this pact between Cloud Foundry and Diego, Diego can make assumptions and optimizations, and that's good because it reduces complexity and pain. One of the key assumptions Diego makes is that it's always running twelve-factor apps. On the other hand, Kubernetes was designed to be as general-purpose and generic as possible, and it doesn't care about the apps you're running. Whatever you provide it, it will run. It doesn't matter if it's twelve-factor or stateful or whatever, and this is especially interesting for people that are interested in running apps that are not strictly twelve-factor. The other big difference is in the flexibility of the scheduling model. Kubernetes has the notion of affinity, taints, and tolerations, which allow you to say which nodes should schedule your pods, or, alternatively, for one node, which pods it shouldn't schedule at any cost. Diego has the notion of isolation segments, and isolation segments allow you to run apps on specific compute resources, but it's not as flexible as the Kubernetes way, and it also requires some additional operator setup in advance. The third big difference is in the Windows support. 
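Before getting to that third difference, the taints-and-tolerations mechanism just mentioned can be boiled down to a simple filter. The sketch below is a drastic simplification for illustration only: real Kubernetes taints carry keys, values, and effects like NoSchedule, while here a taint is just a label that a pod must tolerate; the function name and node names are invented.

```python
def schedulable_nodes(pod_tolerations, node_taints):
    """Toy version of Kubernetes taint filtering: a pod may land on a
    node only if it tolerates every taint on that node."""
    return [
        node
        for node, taints in node_taints.items()
        if all(t in pod_tolerations for t in taints)
    ]


node_taints = {
    "node-1": {"gpu"},   # tainted: only GPU-tolerant pods allowed here
    "node-2": set(),     # untainted: any pod may schedule here
}
print(schedulable_nodes({"gpu"}, node_taints))  # → ['node-1', 'node-2']
print(schedulable_nodes(set(), node_taints))    # → ['node-2']
```

Note the asymmetry this illustrates: a toleration permits a pod on a tainted node, it doesn't force it there; attracting pods to nodes is what affinity rules are for.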
There has been some progress in the Kubernetes community on adding Windows containers, but it's still quite limited. There are still quite a lot of core Kubernetes resources that don't work very well with Windows containers. Diego, on the other hand, since it uses Garden containers, has much fuller and richer support for Windows. Last but not least, Kubernetes is huge. This is just a small portion of the things you can interact with in Kubernetes, and you can even define custom resources, so the list just goes on. This can be a good or a bad thing, depending on how you look at it. It could be a good thing if you want all this flexibility to deploy your app however you want, or it could be a bad thing because now you have all these new things to worry about, all these new things to manage and operate. We've gone through some of the similarities and differences, and we saw that they have strengths in some fields and weaknesses in others. I guess the logical question is: which one should you choose, and which one is better? In the words of Morpheus himself: should you take the Diego pill, wake up in your bed tomorrow, and just continue living your life, or take the Kubernetes pill, enter Wonderland, and discover how deep the rabbit hole goes? For me, the answer here is pretty obvious: it obviously depends, and it depends on your use case. So let's go to the use cases. All right. Let's start with the Diego use cases. When should I use Diego? For that, let's first take a look, again, at the Cloud Foundry architecture today, and I hope you recognize this red thing here: it's deployed with BOSH. That's important for the later use cases. So, Diego, first use case: I only deploy twelve-factor apps. I use services. Why should I get a Kubernetes beast? I'm happy. Don't use any other scheduler than Diego. It's tailored for that use case. Cloud Foundry is made for that, so you're fine. The next use case is: I want BOSH. 
I love BOSH, and I want to keep using BOSH, so no, I don't want Kubernetes. The next one goes hand in hand with that: I have no Kubernetes skills. Even if I wanted Kubernetes, I don't have the skills. I have BOSH skills, so I stick with Diego. Next use case: Windows containers. If you want to run them, Diego is probably the better choice. Cool. So, use cases for Eirini. When do I use Eirini? We thought about use cases for Eirini, and we came up with a skill matrix, and it goes like this. Either you have skills in BOSH and in Kubernetes but not in Diego, or you have BOSH skills and Diego skills and Kubernetes skills, or you have no BOSH skills and no Diego skills but you do have Kubernetes skills, or you don't have any skills at all, right? So yeah, it's confusing, I know, and I promise this will become much clearer as we go through each of those. All right, let's start with the first one. It's not supported yet, but people are working on it. What is it? Skills in BOSH and Kubernetes, but not in Diego. That means you're probably an operator for BOSH and Kubernetes, you run CFAR and CFCR, and you don't have any Diego skills, and probably you have something like this: you have your BOSH-deployed Cloud Foundry, you have another Kubernetes around, you deploy all your twelve-factor apps with Cloud Foundry, and everything else you deploy with Kubernetes. And now you have two schedulers, right? You have the Diego bits, and you have Kubernetes. So why not do this: use Eirini, reuse the existing Kubernetes cluster that you have, and schedule all your apps to Kubernetes. This has some benefits. You have a unified orchestrator and a reduced overall footprint, because you get rid of all the Diego bits. Great. 
Downsides: still, like before, you have two different technologies, two different operational models, two different communities that you have to worry about, so it's not the optimal solution. But still, if you already have a Kubernetes and a CF, it's preferable to use Eirini and schedule all your apps into the Kubernetes cluster. Cool. All right, so for the next use case, you have skills in everything. You can operate all of BOSH, Diego, and Kubernetes, and you probably don't want to do that, because that's just too much work and overhead. The best thing to do here is to operate just one of these things. In this case, that one thing will be Kubernetes, so let's see how we can do that. First, about BOSH: there's a pretty nice project called CF Containerization, and it allows you to take existing BOSH releases and convert them to Docker images and Helm charts, and then you can deploy these Docker images and Helm charts to a Kubernetes cluster. So what this does is basically package the whole Cloud Foundry Application Runtime as containers instead of virtual machines. On the left, we have Cloud Foundry as we know it today, and on the right, we have containerized Cloud Foundry, and as you can see, all the Cloud Foundry components are exactly the same. Nothing changes there. The only thing that changes is how you deploy them. On the left, you deploy them with BOSH as virtual machines, and on the right, you deploy them with Kubernetes as containers. And this has some pretty nice benefits. You again get a smaller memory footprint, because containers are much more lightweight than virtual machines, and you still keep the same developer experience. As a developer, when you do a cf push to containerized Cloud Foundry, you still get the same cf push. You don't see a difference, and that's how it should be. 
And also, if you're already a Kubernetes operator, this will be good news for you, because now you have one less thing to learn, and you can reuse your existing Kubernetes skills to operate this Cloud Foundry. It's not all perfect, though. We still have Diego, and that means that you still have two schedulers to maintain, both Diego and Kubernetes. And also, when you do a cf push, since it's going through Diego, your app will eventually end up in a Garden container, which is running in a Docker container, which is running in a Kubernetes pod, which is not the best thing if you want to debug it. So that's where Eirini comes in, and that's actually the perfect use case for Eirini: when you have a containerized Cloud Foundry. With Eirini, as we saw, all the Diego bits are removed, and now we have Eirini. When you do a cf push, the Cloud Controller points to Eirini, which creates native Kubernetes objects on the same cluster that's actually running your Cloud Foundry. And that's how you get a Kubernetes-native implementation of Cloud Foundry, and you have a consistent operator experience, because now you can just focus on operating one thing. And you get the best of both worlds, because the cf push is not changed at all, which makes the developers happy, and the whole Cloud Foundry can be operated with just Kubernetes, which makes the operators happy. This also opens the possibility of some new things, for example deploying microservices across both Kubernetes and Cloud Foundry. And this is a talk me and Julz had yesterday, so in case you're interested in that, you can probably find it on YouTube in a few days. And you would think that installing this whole containerized Cloud Foundry with Eirini would be hard, but it's actually not. The only thing you need is a helm install and a few minutes, and then you have a running Kubernetes-native implementation of Cloud Foundry in your cluster. 
And for more detailed instructions, you can go to our eirini-release repo and take a look there. So the next use case is actually pretty similar to the previous one; it's just looking at it from a different point of view. If you're someone coming from the Kubernetes community, it's not a secret that you probably need a PaaS, a great PaaS. And Cloud Foundry can be that great PaaS, but previously, with BOSH and Diego, the transition was not very smooth. Now, with Eirini and containerized Cloud Foundry, as we saw in the previous use case, it's much easier. You get the whole Cloud Foundry inside Kubernetes, and you get your apps running as Kubernetes-native objects. So that's the perfect time for someone that's just using Kubernetes to try out Cloud Foundry. And the last use case is when you don't have any skills at all. You don't know how to operate BOSH, or Diego, or Kubernetes. Maybe you don't want to. What do you do in that case? You let someone else do it. And there are a lot of managed Kubernetes offerings out there, from IBM and SAP and Microsoft and Google and so on, and they can potentially be used with Eirini. That way, you can have your own Cloud Foundry that you manage and administrate, while someone else does the scheduling bits and the cluster administration, and you don't need to care about that. Or, if you want to take it a step further, you can now use Cloud Foundry in an enterprise environment that comes with a Kubernetes cluster, containerized Cloud Foundry, and now with Eirini. So that's pretty cool. And, to summarize. Summary. Oh, can you? Great. So, in summary: the first thing is that this is the first talk about Eirini without a demo. Seriously. So just recognize that. Scheduling is commoditized, and we should abstract away the scheduling bits, and this is exactly what Eirini does. And there is no best scheduler. It really depends on your use case. 
And you shouldn't just follow the hype. You should choose the scheduler that fits your needs. That's the summary. With that, thanks for listening. And I think we have another four minutes for questions, which Dr. Julz will probably answer. No, just kidding. Yeah, we have another four minutes for questions, so if there are any questions, you're welcome. All right. Thanks, you two. In case you have a question, I am happy to bring a mic to you, so that you don't have to scream at us all. Questions? When is Eirini scheduled to go to production? I didn't get it; could you repeat the question a little bit louder? When is the Eirini project scheduled to go to prod? To go to prod. Good question. So, we're still really early, and maybe Dr. Julz has plans. So the answer is: we go as fast as we can. It's now available in alpha on IBM Cloud, and we're trying our best to get it production-ready as soon as possible. Yes. Have you guys looked at the new use cases that might be enabled by having Kubernetes under the covers, with all the scaling and all the other primitives that Kubernetes provides, and how that might benefit Cloud Foundry from an operator's view of scaling? Have we taken a look? Not yet. Basically, we could enable every feature that Kubernetes provides, of course, but there are no plans yet in that regard to actually support them. But of course it could happen. In cases of resource contention on a pod, will Kubernetes be able to prioritize keeping components like CAPI and other CF components alive over keeping app containers alive? Well, I guess that's more of a question for the CF Containerization team, because we just use them to deploy our Cloud Foundry; we are only responsible for running the apps. Exactly. 
Is the functionality that's being built, and you kind of alluded to this at the beginning of the talk, being built in a way that you could theoretically swap in any container orchestration platform, provided that it implements some set of APIs? Yes. So we have this OPI currently in our codebase, and theoretically you could plug in any other orchestrator. But currently we're just focusing on Kubernetes, for the reasons that we said. But yes, we could. And actually, there was already a prototype for Knative, for example, and it works. You could basically plug in any scheduler. PRs are welcome. Yes. So we're right on time, with maybe one last question, but other than that... All right, it doesn't seem like it. Thank you for attending this talk. Thanks.