Welcome to Intro to Eirini. I am deeply aware that it's this talk, the next talk, and then beer. This is worth it, I promise. If you haven't heard of Eirini, which I'm now realising might well be pronounced "Irini"... are there any Greek people in the audience? Is it "Irini" or "Eirini"? Neither, is the answer to that question. So, we're going to get started. This is our Intro to Eirini talk. Clickers never work on stage. There is no clicker. Watch that. I am Dr. Julz. I'm an IBMer. I'm a proud Eirini-er. Don't worry, Kubernetes is more stable than Keynote. This is Herr Julz, my colleague, and we are going to tell you about Eirini.

This talk is in three parts. The first part is why we did such a thing as Eirini. The second part is how we did such a thing as Eirini. And the third part is a demo of the thing that we did, called Eirini. Or, and this is my favorite slide, in the words of the Buddha: there is suffering, there is a path to the end of suffering, and this is a demo of the path to the end of suffering. If that doesn't set us up, I don't know what does.

Let's talk about why we're doing this. For the last few summits, there has been a big, scary elephant in the room, and this elephant has a name: Kubernetes. And what did we do? We gave it a CF-y name, CFCR, and we put it next to CFAR. But renaming things doesn't really solve our problems, right? It's still Cloud Foundry and Kubernetes. And people start asking questions. Is it Kubernetes versus Cloud Foundry? Is it Kubernetes and Cloud Foundry? Or is it Kubernetes or Cloud Foundry? Those are all valid questions, but before we get to them, let's first talk about what Cloud Foundry and Kubernetes are. Cloud Foundry is two things. It is the "cf push my app, cf bind my service" experience.
And of course, it's the container orchestrator, Diego and Garden: the thing that runs all the stuff in the cloud, the how. And cf push is the developer experience. You can also see this from a role perspective: cf push is the developer role, and all the backend stuff, the container orchestrator, Diego and Garden, is the operator role. So we have a clear separation of concerns. But what we really want, and what I really love, is the cf push experience. And I'm quite sure that a lot of people here love the cf push experience that Cloud Foundry provides. I mean, the container orchestrator is great, and Julz is passionate about Garden and all the low-level container stuff, but what we really love is cf push, and that's what we want.

So what is Kubernetes? Kubernetes is Deployments, StatefulSets, ReplicaSets, nodes, taints, annotations, DaemonSets. I could go on forever. Looking at this from a role perspective, the roles overlap a little bit: somebody who deploys an app could be a developer or maybe an operator. There are some tools, like the package manager Helm, that make things easier for developers, but it's still not clear-cut. Still, with all these options, Kubernetes is a powerful and great scheduler, and it gives operators the flexibility they need. That's where Kubernetes has its place. But for developers, it's maybe a little bit too complex, right? As the founders of Kubernetes have said, it is a platform for building platforms, which means that Kubernetes is a platform to build platforms on, not a platform by itself. It was never intended to be a developer experience like Cloud Foundry. So we have them side by side: the developers and operators of Cloud Foundry on the left side, and Kubernetes on the other side with its operators and developers.
But what we would really like to have is this: the developer experience from Cloud Foundry, while making the operators happy with Kubernetes and all the flexibility and power it has. A few people have already suggested solutions to this, and the first option I want to talk about is one we already mentioned: putting them side by side. Side by side, you have the full power of Cloud Foundry, you have the cf push experience, and you have the full power of Kubernetes. But there are downsides: you have two sets of nodes, two schedulers to monitor, two different ops models, and two communities. It's a lot of effort.

Option two is BOSH to the rescue. What is that? It means using BOSH to deploy Cloud Foundry to Kube: using a Kubernetes CPI to deploy a containerized CF on top of Kubernetes. This solves density, so you only have one set of nodes, the Kubernetes nodes. That's great, but it still has downsides. You deploy one complex thing on top of another complex thing, and neither the CF apps nor CF itself really gets any benefit from the Kube scheduler, which is a great scheduler, right? And now you have two problems: you had N plus M complex things, but now you have N times M complex things. Take resource limits, for example: if a container runs out of memory, is it the inner container or the outer container?

Now let's go to option three: containerized Cloud Foundry. This is another option. It means converting the BOSH releases up front, before we deploy, so we can use Kubernetes-native ways to deploy Cloud Foundry, like Helm, which is awesome and makes it really easy. And now we have a real benefit. By the way, this is now available on IBM Cloud as Cloud Foundry Enterprise Edition, so make sure to check it out. But I still have to say there are also some downsides to this.
We still have one complex thing inside another complex thing. We have the Diego scheduler inside the Kubernetes scheduler, which is scheduling containers into containers. We have nested containers. It's weird; we still have the N-times-M problems. So we looked at three options, but none of them is really a good option. So what's the solution? That's something Julz will tell you.

Cool, so I get to look smart by telling you the solution. To talk about the solution, we want to talk about the goals, what we're trying to achieve. To summarize again: we want to keep the cf push experience that lets you focus on your code, bind services, push stateless code, and not worry about all the other stuff. But we don't want to make people learn and manage a new scheduler to do that. We want them to be able to reuse their existing knowledge with a consistent experience, and we'd like to have one community and bring these together. I've spent a lot of time talking to container people, and I try to tell them how great it is to just be able to cf push and not worry about things, and then I have to explain that they're going to have to learn Diego, and then they're going to have to learn BOSH, and you can see the enthusiasm drain away as you describe that. And that's fair. These are great technologies, but asking anybody to learn a whole different technology to use your stuff is a big ask, even when your stuff is great, which it is.

So what do we do about it? CF is a developer experience, a developer experience that I love and want to use. But Kube is a scheduler with increasing mind share, one that a lot of operators already offer and already have skills in. So let's use Kubernetes as the Cloud Foundry scheduler, and that way our developers are happy, our operators are happy, everyone is happy. Let's all be happy. I knew if I paused long enough.
So this is Project Eirini. This is OPI, the orchestrator provider interface, and we're obviously riffing on the BOSH idea of a cloud provider interface, which is how BOSH is able to run on all these different infrastructures as a service. It's the same idea at the container layer. We're only going to implement it for Kubernetes for now, but being decoupled behind that abstraction just seems like a good idea. So we have this orchestrator provider interface. What does that look like? Here is what I call a complexity diagram: boxes. You're not supposed to read the boxes; you're supposed to squint and see that there are lots of them. How do we Kubernetes-ify this? How do we Eirini-fy (or Irini-fy, or neither-of-the-two-fy) this? We do that. As you can see, it's much simpler by squinting. The big blue thing is Kubernetes. The small pink thing is a little mapping layer called Eirini. In the original Diego architecture, we have a sync loop that converges things in the Cloud Controller database (your apps) with things in the Diego database (your containers, your LRPs). The same thing happens with Eirini: we take the state in the Cloud Controller, your apps, and we sync it into Kubernetes as Deployments, StatefulSets, and Services.

So let's dive into that. You do a cf push. As this chart demonstrates, it goes through Eirini and becomes Kubernetes objects. Specifically, if you know CF, you'll know that CF thinks in terms of droplets and root filesystems, whereas Kubernetes thinks in terms of images. What we didn't want to do was build something that mapped to Kubernetes, but to non-native Kubernetes objects. We wanted to make sure that the things that end up in Kubernetes are as native and normal as possible, so that all your regular workflows and knowledge work if you're an operator of them.
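The sync loop just described, converging Cloud Controller state into Kubernetes objects, can be sketched roughly like this. This is an illustration of the idea, not Eirini's actual code; the shapes of `desired` and `actual` (app GUID mapped to instance count) are assumptions made for the example:

```python
def converge(desired, actual):
    """Compute the actions needed to make `actual` (what exists in
    Kubernetes) match `desired` (what the Cloud Controller says).
    Both are dicts mapping app GUID -> instance count."""
    actions = []
    for guid, instances in desired.items():
        if guid not in actual:
            # App exists in the Cloud Controller but not in Kube yet.
            actions.append(("create", guid, instances))
        elif actual[guid] != instances:
            # App exists but with the wrong number of instances.
            actions.append(("scale", guid, instances))
    for guid in actual:
        if guid not in desired:
            # App was deleted in the Cloud Controller; remove it from Kube.
            actions.append(("delete", guid, 0))
    return sorted(actions)

print(converge({"app-a": 3, "app-b": 1}, {"app-b": 2, "app-c": 1}))
# -> [('create', 'app-a', 3), ('delete', 'app-c', 0), ('scale', 'app-b', 1)]
```

Running the loop repeatedly against the live cluster state is what makes the mapping self-healing, in the same spirit as Diego's convergence.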
So we convert the droplet into a Docker image before we send it to Kubernetes, with a custom registry that puts the droplet on top of the rootfs. It looks like this: if you think of an app container, the top layer is my app. That's a tar file that we untar onto a rootfs container. In Kubernetes, instead, we just create on the fly an OCI image describing the same thing. We then have a URL for that object, that collection of layers (the droplet and the rootfs), which describes what we would like Kubernetes to run. That means all the Cloud Controller stuff about droplets stays exactly the same. You can still roll back droplet versions. You still get automated patching. The registry now lives in Bits-Service. It's just a custom registry that sits on top of the blob store. That's how we map droplets to images natively.

We also map apps to StatefulSets or Deployments. They're currently StatefulSets, just to maintain parity with the instance index field, but hopefully they'll move to Deployments when we deprecate the instance index. And we map all your routes into Services and Gorouter stuff. So everything in the Cloud Controller stays the same, and it's just a convergence loop into totally native Kubernetes objects.

There is one other thing: our staging component, so we can do staging without needing Diego. We run the same buildpacks code in a Kubernetes Job. We obviously call it Staginetes; that's how you spell Staginetes. It's just a Kube Job that converts your stuff and uploads droplets. It doesn't upload images, because we want to keep the ability to keep stuff patched and to roll back droplet versions.

So, brief aside: why didn't we do this before? If it's such a good idea, why are we only doing it now? We had, I think, genuinely very good reasons for hesitating for a while. The main one was that it wasn't time.
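Going back to the droplet-on-rootfs image for a moment: the idea can be sketched as content-addressed layers, the way OCI images reference their filesystem layers by digest. This is a hedged illustration of the layering, not the real Bits-Service registry code, and the blob contents are placeholders:

```python
import hashlib

def layer_digest(blob):
    """Content-address a layer blob the way OCI registries do."""
    return "sha256:" + hashlib.sha256(blob).hexdigest()

def droplet_image_layers(rootfs_blob, droplet_blob):
    """An image for a CF app is just two layers: lowest layer first,
    the shared CF rootfs, then the app's droplet tarball on top."""
    return [layer_digest(rootfs_blob), layer_digest(droplet_blob)]

layers = droplet_image_layers(b"rootfs tar bytes", b"droplet tar bytes")
print(layers)
```

Because the rootfs is its own layer, patching it (or rolling back a droplet version) just means pointing the image at a different layer digest, which is why the Cloud Controller's droplet model can stay unchanged.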
There was a lot of movement in the scheduler market, and spending lots of effort moving to a scheduler at that time didn't deliver a lot of value to any particular user. Who cared right then? Why do people care now? I think because scheduling is now a commodity. And the fact that it's a commodity means there's a huge market of tools and services and mind share and tutorials and skills around running it. Therefore, giving people the option of using those makes a lot of sense. It means you can delegate all of your Kubernetes operator stuff to a Kubernetes service and just run the CF bits, while still getting all the benefits of cf push.

Think about the haiku. You can't do a CF Summit talk without mentioning the haiku; it's actually a rule. There's a haiku about the rule, I suspect. "Here is my source code, run it on the cloud for me, I do not care how." The great thing about that: the CF promise has always been that you just care about cf push, we will care about the how, and you will not have to change how you work as that how changes. As new things like Istio come along, they get brought into the platform without you changing stuff. As new things like Kubernetes come along, they get brought into the platform without you changing stuff. I think that's pretty cool. So what changed, really, is that it's time. It's time now. cf push has always been the thing we cared about, and now that Kubernetes is a commodity, we can make everyone happy, and so we should. "Let's have a demo of the end of suffering" is a hell of a setup for a demo. Let's watch it happen.

Okay, let's get to the cool stuff: a demo. Before we start with the demo, I just want to talk a little bit about the environment setup. On IBM Cloud, I provisioned a Kubernetes cluster using the IBM Kubernetes Service, and on top of that I deployed a containerized CF, including Eirini. You will see my terminal, and it's basically split into two panes.
Well, there will be four, but the left pane is the Cloud Foundry part, where I will perform all the CF CLI commands, and the right side is the Kubernetes part, where I will perform the kubectl commands. The left side is really the developer role, and the right side is the operator role. We have a perfect balance between the developers and the operators.

Okay, let's get to the demo. I hope everybody can read my terminal. I think, well, I hope, it's the perfect size. As I said: left side, Cloud Foundry. In the left pane I did a watch on cf apps, and I already deployed an app called hello-summit. On the right side, there is the Kube object for this app. Its name is basically the GUID of that app, and it's running. But we will push another app in a moment, so that you can see how Eirini works and how things appear in Kube and in CF.

First, let's take a look at the containerized CF deployment. Let me make this a bigger screen for now; I hope this switches for you. Here is the whole Cloud Foundry as you are used to it, with all the components, but with one difference: there is no Diego included here. You don't see any Diego component, but what you do see is the Eirini component, which does all the work for Cloud Foundry to schedule the apps on top of Kubernetes. And here we have the eirini namespace. This is the namespace where all the apps end up. Great. So let's start and push an app: cf push, and let's see what happens.
You see the basic Cloud Foundry output, but what you don't see yet is the staging logs, because they currently are not streamed to the CF CLI. What you do see is that there is a pod doing the staging on the Kubernetes side, and we can simply show the logs of that staging job, where you see the exact staging output you usually see on cf push. Let's just wait until it stages the app, and then we'll see 1/1 instances running on the Cloud Foundry side. So the staging job is done, the app is already scheduled, it just needs to get ready, so this takes a second or something, maybe two, maybe three... and there we go. It runs; we have a running app. You see an app here, and it has a URL, and we will curl it in a second. But first let's perform some basic CF CLI commands. Let's cf stop the app and see what happens. You can already see on the Kubernetes side, on the right, that it just terminated the app, but cf apps still says the app is there, just with 0/1 instances running. So let's bring the app back up again: cf start. There we go, and now you see in the upper-right pane, on the Kubernetes side, how the pod comes up again. There we go. Awesome, right? That's great.
It's just waiting until the app is ready... it's ready, and we see the instances coming back up, 1/1 instances. Of course, we can also restart the app, which is nothing other than stopping and starting it. Now I would like to scale the app, let's say to three instances. Eirini deploys StatefulSets, and the last digit in the pod name is actually the instance number, so you see that it first schedules the first, then the second, then the third instance of the app, and when everything is ready you also see that three instances are running, which is awesome. Now let's scale back down to one instance. This also works fine: you see the pods terminate again, and the instance count is updated in cf apps. Cool.

Now let's curl the app; that's actually the interesting part of the whole demo, right? I curl the app, and there we go: "hello CF from Kubernetes". We also have the Loggregator integration, which basically aggregates logs on top of Kube, and you can use cf tail to show an app's logs. Nice, I have output: the request from when I curled the app. Let's curl again and see what the tail says... and you see the curl was logged. Awesome. I think that's enough for the demo. If you go to the repo, there is enough documentation on how you can set it up yourself and play around. It's really easy, it's just two helm installs, so make sure to check it out. It's not a big deal; have fun with Eirini. So, back to the slides. That's pretty cool, right?

All right, and now for the thrilling conclusion, where it all comes together. The summary: CF is the developer experience that I love, and I think a lot of us love.
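The scaling behaviour in the demo above, where StatefulSet pod names end in an instance ordinal, can be sketched like this. The app name is the one from the demo, and the helper function is illustrative, not part of Eirini; it just shows why the StatefulSet's stable naming preserves CF's instance index:

```python
def pod_names(statefulset_name, instances):
    """StatefulSet-style pod names: a stable base name plus a trailing
    ordinal (0, 1, 2, ...), which maps directly onto CF's instance index."""
    return [f"{statefulset_name}-{i}" for i in range(instances)]

# cf scale -i 3 would result in pods like these:
print(pod_names("hello-summit", 3))
# -> ['hello-summit-0', 'hello-summit-1', 'hello-summit-2']
```

Scaling back down removes the highest ordinals first, which matches the terminations you see in the right-hand pane during the demo.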
I think if you've played around with Kubernetes for a while, you realize it's just an awesome piece of software, but if you watch the ease of doing that cf push and cf scale and cf tail, I think you understand why we're excited about being able to bring this cf push experience to Kubernetes people.

Where are we with this? This is now an official CF incubator project, as of the last summit, which is really awesome. We currently have two pairs working on it: two people from SAP, two from IBM. That was because we wanted to go really fast to begin with, and fewer pairs go faster at the start. We're hoping to scale that up soon and get more people working on this; we're now ready to start doing that. Most of the CATs, the Cloud Foundry Acceptance Tests, are passing; most of the core CATs. There's a very small number that we're just finishing up. It looks larger than it is, because there are one or two fixes that fix about fifteen things; that tends to happen. You can install on GCP or IBM Cloud, and you can probably install on most other Kubernetes services. You can kick the tires. There are still lots of reasons you might want to use Diego or BOSH, but if you have a Kubernetes or a Kubernetes-as-a-service and you want to try this out, it is just two helm installs.

We are all around the conference and super excited to talk to people if you have any questions. Or we also have a whole three and a half minutes for any questions that you have. Before I go to questions: this is not all of our work. These are the other people on the team, with me and Herr Julz at the front: Stefan Ulig, Georgi Dankov, Maria, Kiev, Simon, Moza and Andrew Edgar are making this happen. With that, if there are any questions.

The current model is just an eirini namespace which you don't have permissions on, which is kind of similar to the BOSH model, where there's an operator for your Cloud Foundry, if you see what I mean.
One of the nice things about this approach is that it would obviously scale quite nicely to saying, well, this operator just operates these apps in this space, for example. So we could put different role-based access controls on particular apps or particular spaces. That's one of the things we hope to look at after we hit the first milestone; the first milestone is all the CATs passing, so people can use the core functionality. We're then going to start looking at other features of CF to move over, and one feature we've heard a lot of interest in is isolation segments. With this approach, we could either implement isolation segments as different namespaces in the one Kubernetes, or each organization could actually have a different Kubernetes that it syncs to, which would potentially let each org in your CF control plane have a completely different Kubernetes. Yeah.

I think to begin with, we want parity for the app model. Although I hope that by doing something like this, we'll start to bring the two communities together, so we can change some of the CF abstractions for both schedulers, if you see what I mean. We'll have people from that community getting more involved, with feedback and a virtuous cycle of both of them changing. I also hope that over time various other components will start to become more integrated with Kubernetes. You already see that happening with things like the Gorouter's move to Istio. Hopefully more and more things will start doing that kind of stuff. You could imagine, and I don't think this will happen soon, but you could imagine, let's say, that it would be nice if operators didn't have to run a database: let's allow CRDs to be used as a backing store for the Cloud Controller database. You can imagine all sorts of things like that evolving over time.
But for now, what we want is just the most native operator experience for the apps, with the exact same cf push experience for the developers.

So it depends where the limit is. The limits are enforced in the Cloud Controller, as part of the orgs, spaces, and quotas model that exists up there, and those would stay. Those are part of that rapid, twelve-factor application thing around Cloud Foundry. There are other things that are limitations of the platform that might change. One thing I'm quite excited about, one idea I really like, is this idea in the Kube ecosystem that there are no real nodes in the Kubernetes cluster: there's a virtual node that pretends to be whatever size you need, and behind the scenes the provider does everything to make that work. It means you don't have to set up n nodes in advance at all. You can just cf push and have it scale up and down, and someone else manages that; someone else deals with the complexity of making that happen. You just do your cf push and pay for as many containers as that takes. I think use cases like that come out of using this commodity technology. Now that the commodity is available as a service, it enables things like not paying for whole environment resources, because they're available as a service. It depends where that resource limit is.

Do you need to go to the develop branch for now? Yes, for the Helm instructions, because, let's just admit it, we were developing this for CF Summit. We wanted to have it ready for CF Summit, and we'll merge it. So for the moment, look in the develop branch, or ping us on Slack: we're in #eirini-dev on the Cloud Foundry Slack, and we will very happily hold everyone's hands getting started with the Helm stuff. We really want people to kick the tires and start giving us feedback about what does and doesn't work, and what does and doesn't make sense, when you use this for real stuff.
The clock is flashing "time's up", and I didn't see any hands in the last six seconds. So thank you very much. Have a great rest of the conference.