We will get started because we have a ton of demos. You lucky people. Welcome to our little Eirini project update. You might have heard of Eirini. It is our little project to bring the cf push user experience to Kubernetes. This fine-looking, fresh-faced gentleman on the left is clearly not me. When was that taken? I needed a shave badly. And this is Mario on the right, more accurately portrayed in his photograph. The way this talk is structured, I'm going to talk really quickly to start off about the why of Eirini, mostly so I can use the pun "why Eirini". I'm not joking, that's going to appear on a slide really soon. Then we're going to talk about how it all works and show a demo of it. So, why Eirini? Why did we decide to do this? Why do we think this is a good idea? There are lots of potential reasons you could want to do this, and I think all of these are good reasons, right? Having a consistent operator experience for the things you've cf pushed and the things you've kubectl applied; being able to integrate with the ecosystem of Kubernetes services; having the cf push experience available to a wider number of people without having to buy into all of the stack that we have. But I want to talk about my personal reason, the reason I'm really passionate about the Eirini stuff. That's that we're solving hard problems, right? We forget that what we do with computers is solve hard problems. Computers are actually hard, but we make them easier over time and we get to solve harder things. So, we've gone in the last few decades from MS-DOS to men on the moon to driverless cars, all the way through Fortnite to an iOS app that causes my girlfriend to get an ice cream delivered to the door, which seems like the height of decadence, but is a thing that has happened, right? How do you do that? How do we make that transition? We've got the same chimp-sized brains in our heads, and yet we've been able to solve these bigger and bigger problems.
There are basically two things that have ever worked, and we've known them since Fred Brooks wrote "No Silver Bullet" in the mid-eighties. One is finding good abstractions, finding high-level abstractions: you couldn't build a driverless car in assembly language, right? You have to find higher and higher-level abstractions. The other is removing accidental complexity. The idea is that there's essential complexity, the complexity of the problem itself, and accidental complexity, the stuff we do to ourselves. And if we can remove the accidental complexity, we can focus on the essential complexity that we can't remove from our problems, and build better and better things. So, as an analogy, I want to play a game. The game is called Operating System or Web App. I say a programming language and you say whether you should use it to write operating systems or web apps, right? Are we ready? Operating system or web app? C. OK, if anyone said web app, we were going to have a problem. It is indeed operating system. So far, so good. We've got 100 in the bank now. Let's do another. Ruby. Ruby is a great programming language. I love Ruby. Not everyone agrees with me; I can feel it in the room. There was a little mumble of discontent just now. Ruby: operating system or web app? Yes, correct. It is web app, right? If anyone is building an operating system in Ruby, come and talk to me after this. I want to have a conversation with you. I think I can help. So, the point is, this doesn't make C or Ruby better, right? It's not that Ruby is better than C or C is better than Ruby. They each have a sweet spot where they're good, a sweet spot they're best for. You can build a web app with C, you can build an operating system with Ruby, but different tools have different sweet spots where they help you more. So, how do you go from C to Ruby? Is it more features or fewer features? I think it's fewer. If you add features to C, you get C++. You don't get Ruby.
You get Ruby by removing the pointers, removing direct memory management. You have to remove accidental complexity to find a higher-level abstraction. That brings us to Kubernetes. So, what is Kubernetes? Have we heard of Kubernetes? A few people have heard of Kubernetes; I'm happy to see it. I googled it, which means I'm fairly sure of... nothing. But it is what it says it is: an extensible, open-source platform for managing containerized workloads. What is it for? What is Kubernetes for? If you talk to people in the Kubernetes ecosystem — Kelsey Hightower is one of the biggest thought leaders within the Kubernetes ecosystem — they will tell you Kubernetes is a platform for building platforms. I like this tweet because I'm pretty sure he's talking to us: if you're a developer building your own platform, an App Engine, Cloud Foundry, or Heroku clone, then Kubernetes is for you. I've added the emphasis, but I think we can see the point, right? So, Kubernetes has a sweet spot of distributed systems, of more complex distributed systems, of platforms. But what about stateless apps? What do we do about stateless apps? If we're honest, that's the vast majority of what most people should be doing: pushing stateless apps and binding services, right? Don't push databases to the cloud unless you want to be a DBA, because you have to manage them and back them up and do all the other tasks. You probably want to bind those and use stateless apps. So, that brings me to one more buzzword. I like to maintain an average buzzword-per-minute rate of at least 0.2, and you could feel it was dropping. So: serverless. People get really annoyed by this serverless word. Oh no, there are still servers. Yes. Actually, I really like the serverless word. I'm sorry, but I do. I think the serverless word is exactly correct, because what cloud has always been about is the complexity we can remove. So, think about the original infrastructure as a service, right? The original cloud. What was that? It was data-center-less. Give me a server.
I don't care if it's the server in J4 or K2. Just give me a server. Make the data center disappear as a concept I have to think about. What is PaaS? What is this PaaS thing? It has always been operating-system-management-less, middleware-less, scheduling-less, container-less. Those are the concepts we're letting people not think about. And that's actually the point of what I think we've been doing: what accidental complexity can you remove? So, you can think of a gradation of abstraction as you move up, with containers at the bottom moving up towards functions. And as you go up, you do more and more dev, and less and less ops, right? You focus more and more on your problems, and less and less on operating your problems. The advantage of Kubernetes, the opportunity of Kubernetes, now that we have this dial tone at the bottom, is that you can have multiple abstractions, all interacting, all interoperating on top of that framework, which means you can use the right tool for each part of the problem without having to pick an entirely different ecosystem. I was joking with someone the other day — it's one of those jokes that's silly, but true: Ruby is a great language, but if I had to use a Mac to use Ruby, I'd probably stick with C, even for my web app, right? Asking people to buy into a whole different system is tough, whereas now we can offer them the cf push user experience on whatever platform they're using, as long as it's Kubernetes. So... that's my answer for why Eirini. That's my personal answer. I think we want to take the cf push user experience, this high-level, productive, stateless-app-focused user experience, and make it really easy for people to adopt and use wherever we find them, which is often on Kubernetes. And now, in my favourite pun basically ever, we're going to talk about how Eirini. So let's actually see how this Eirini thing works. There's a lot to show and very little time to show it.
So this is the current status quo, and this is what Julz calls a squintogram. You don't really read the boxes; you just sort of squint and determine that there's a lot of them. So I'd like you to just focus on the big blue bits, which are Diego, the current Cloud Foundry scheduler. Looking at this picture, how do we actually put Kubernetes in here? Pretty simple. We let Kubernetes do what Kubernetes does best, which is schedule containers. And in between the Cloud Foundry bits and Kubernetes, we have a small layer called Eirini, which ties everything together. So let's dive into this Eirini box. This is what it looks like. It's kind of busy; again, it's a squintogram, so don't really bother. But the red bits on this diagram are related to getting your app staged and running in Kubernetes, and in the periphery we have a bunch of components doing metrics and logging and routes. We'll go over those in a minute, but let's focus for now on staging and running your application in Kubernetes. So let's jump to the terminal and do a cf push. I'm going to push Dora. On the top left, you're going to see cf app dora on a watch, and on the right, we're watching the pods on Kubernetes. This will stay like this for a while, so let's go back to the presentation and see what we're actually doing. If you do a cf push like we just did, nothing in the CLI changes; you're getting the exact same experience. But the difference is that the Cloud Controller will then send a desire request to Eirini, and Eirini will take that desire request and convert it into something that Kubernetes can understand, which is a StatefulSet. After that point, it's really up to Kubernetes to schedule an instance of your application. And if we go back, it should be fairly done with staging pretty soon. It is Ruby, so it takes between two to five years. And it is done.
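To make the translation step concrete, here is a minimal sketch of the conversion Eirini performs: a Cloud Controller desire request for an app becomes a Kubernetes StatefulSet manifest. The field names of the request and the image URL are hypothetical placeholders, not Eirini's actual wire format; only the overall shape (desire request in, StatefulSet out) is what the talk describes.

```python
# Illustrative sketch: turn a desired-app request into a StatefulSet manifest.
# Request field names and the registry URL are made up for this example.

def desire_to_statefulset(desire):
    """Convert a desired-app request (as a dict) into a StatefulSet manifest."""
    name = f"{desire['app_name']}-{desire['app_guid'][:8]}"
    return {
        "apiVersion": "apps/v1",
        "kind": "StatefulSet",
        "metadata": {
            "name": name,
            "namespace": "eirini",
            # Eirini keeps the app's routes in annotations on the StatefulSet.
            "annotations": {"routes": ",".join(desire.get("routes", []))},
        },
        "spec": {
            # The requested instance count maps directly onto replicas.
            "replicas": desire["instances"],
            "template": {
                "spec": {
                    "containers": [{
                        "name": "opi",
                        # The image URL points at the droplet-backed registry.
                        "image": desire["image"],
                    }]
                }
            },
        },
    }

manifest = desire_to_statefulset({
    "app_name": "dora",
    "app_guid": "deadbeef-0000",
    "instances": 1,
    "routes": ["dora.example.com"],
    "image": "registry.example.com/cloudfoundry/dora:latest",
})
```

From here on, Kubernetes owns reconciliation: it sees a StatefulSet asking for one replica and schedules a pod to satisfy it.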
So you can see the dora app, and then a funny-named pod in Kubernetes, and we're actually getting an instance reported by CF on the left. Cool, but does it work? Can I actually curl it? And I don't have a... I do. So it does. It says, hi, I'm Dora. At this point, it's really easy to use features that Kubernetes has to implement features for Cloud Foundry. For example, if you do a cf scale, again, the Cloud Controller is going to send an update request to Eirini, and all Eirini has to do is change the replicas field of that StatefulSet to four, for example. And again, it's up to Kubernetes, on its own time, to schedule those four instances. If one of those instances crashes or dies, then it's Kubernetes' job to bring it back up. This way, we can reuse features like CPU and memory limits, readiness checks, and health checks to, again, reach feature parity with Diego. So let's see this in action. I'm going to scale my app to, let's say, two instances. And you can see that immediately we have a container creating, and I need to actually scale this so you can see. Oh, come on. There it is. So we actually get that other instance listed as starting in cf app. And if I go and copy this and say kubectl delete pod, we can see that pod will be reported as down. OK, it kind of missed the moment because of the watch, but it will be reported as down in cf apps at the same time. So at this point, you probably have a question along the lines of: all right, Cloud Foundry uses droplets and Kubernetes uses images; how do you get those things to work together? And the answer is that we're keeping droplets and staging the same in Eirini. You still get automatic buildpack detection. You still get that droplet produced. But the difference is that we actually have a registry running which will use that droplet to generate an image and serve that image to Kubernetes when your app is started.
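The scale path described above really is that small. A rough sketch, under the assumption that the update is applied as a JSON merge patch against the StatefulSet (the exact patch mechanism is an implementation detail this example glosses over):

```python
# Illustrative sketch: a cf scale becomes nothing more than an update to the
# replicas field of the app's StatefulSet. Kubernetes then converges the
# actual pod count to the new desired count on its own.

def scale(statefulset, instances):
    """Update the manifest in place and return a merge-patch body for it."""
    statefulset["spec"]["replicas"] = instances
    return {"spec": {"replicas": instances}}

app = {"spec": {"replicas": 1}}
patch = scale(app, 4)
```

The equivalent from the Kubernetes side would be something like `kubectl scale statefulset dora --replicas=4`; crash recovery then comes for free, because the ReplicaSet-style reconciliation restores any pod that dies.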
That registry used to be a part of Eirini, but right now it's a part of Bits-Service. So thank you to them for taking that piece of code into their codebase. And that's all fine and well, but how do you actually create an image? For that, let's take a step back and see how we do staging. Again, if you do a cf push, the Cloud Controller will send a staging request to Eirini. Eirini will then create a Job in Kubernetes, which is just a one-off task, and that task runs what we call the recipe code. The recipe code does exactly the same things that Diego does: it will pull down your app bits, it will run the buildpack app lifecycle code, which detects buildpacks and builds the droplet, and when it's done, it will upload the droplet back to Cloud Foundry so Bits-Service can have access to it later. But what's really in the image? We use the droplet, which is your app, as a single layer of the image, and the rest of the image is a rootfs which we of course call eirinifs. And what eirinifs is, is just cflinuxfs plus another component, which is called the launcher in Diego, which will actually launch your app. So we're even launching the application the exact same way. And Kubernetes just receives a URL, from which it can pull the image like from any other Docker registry out there. So if we go back to the terminal, I'm going to push dora2. I'm very creative with names, by the way. We're going to see that a Job gets created when staging starts. There it is. And if I get that pod and say kubectl logs -f, we're going to see the logs of staging being streamed from that pod to the CLI. We'll go over how we do that in a minute. After this staging is done, I just want to grab the pod name. And again, Ruby is slow, so it takes a while. Any second now. There it is. So that's the pod, and I just want to focus this pane for now. I'm going to say kubectl describe pod.
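The one-off staging task can be sketched as the Job manifest below. The recipe image name and the environment variable names are hypothetical placeholders; the point is only the shape: staging is a Kubernetes Job that runs once, builds the droplet, and exits.

```python
# Illustrative sketch of the staging Job Eirini creates. The "eirini/recipe"
# image name and the env var names are invented for this example.

def staging_job(app_guid, staging_guid):
    """Build a Job manifest for a one-off buildpack staging task."""
    return {
        "apiVersion": "batch/v1",
        "kind": "Job",
        "metadata": {"name": f"staging-{staging_guid}", "namespace": "eirini"},
        "spec": {
            "template": {
                "spec": {
                    # A staging task runs to completion exactly once.
                    "restartPolicy": "Never",
                    "containers": [{
                        "name": "recipe",
                        "image": "eirini/recipe",  # hypothetical image name
                        # The recipe pulls the app bits, runs the buildpack
                        # lifecycle, and uploads the droplet when it is done.
                        "env": [
                            {"name": "APP_GUID", "value": app_guid},
                            {"name": "STAGING_GUID", "value": staging_guid},
                        ],
                    }],
                }
            }
        },
    }

job = staging_job("deadbeef-0000", "stg-123")
```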
And I'm going to grep for the image. So we can see the URL here, and this URL is actually public. So if I copy it, I can do something like docker pull — I need to add the tag here — and this will actually pull down the image exactly the same as from any other Docker registry out there. And you can see that one of the layers already exists; that's the eirinifs. The other layer is actually the droplet. We build this image in Bits-Service on the fly: Bits-Service will just pick up the current droplet for your app, pick up the current eirinifs, and serve that image when you do the request. This will take a while to pull, so let's move on. The next thing I want to talk about is logging. We do logging by using a data collector called Fluentd. Fluentd runs on each node of Kubernetes, and it looks for the container logs on the node itself, from the container runtime. And we have a plugin for Fluentd that filters out everything except the containers coming from Eirini, and sends those over to Loggregator. So when you do cf logs, you actually get your application logs immediately. This Dora has been running for a while, so if I do cf logs dora, and if I curl Dora's stdout endpoint with hello, you can see the log immediately. And let's do stderr. Hello. There. Awesome. So that's it for logging. The last bit that I want to talk to you about is how we do things like routes, crashes, and metrics. These things all work in roughly the same way: we have an emitter running in Eirini, which is just a goroutine on a loop, that takes some information from Kubernetes and sends it to the relevant component in Cloud Foundry. To get the information from Kubernetes, we use a bunch of APIs. One of those is the Informer API. The way the Informer API works is that when you create, update, or delete a resource in Kubernetes, that sends an event.
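The Fluentd side of the logging pipeline described above might look roughly like the fragment below. This is an illustrative sketch only: the `tail` and `grep` plugins are standard Fluentd, but the output plugin name and the namespace filter are placeholders standing in for Eirini's actual plugin.

```
# Illustrative Fluentd pipeline (the loggregator output plugin name is a
# placeholder for Eirini's real plugin):

<source>
  @type tail                        # read container logs from the node
  path /var/log/containers/*.log
  tag kube.*
</source>

<filter kube.**>
  @type grep                        # keep only Eirini-managed containers,
  <regexp>                          # assuming they log under the eirini
    key namespace                   # namespace label
    pattern /^eirini$/
  </regexp>
</filter>

<match kube.**>
  @type loggregator                 # hypothetical output: forward to Doppler
</match>
```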
It will forward that event to whatever informer you have running on your client side. So for example, for routes, we have a route emitter that listens for pod creations and deletions. When you, for example, scale your application, that creates an event in Kubernetes, which is forwarded to our route emitter, and the route emitter emits a registration message for that new instance to the Gorouter. On the other hand, if that instance crashes or dies, that also produces an event which gets forwarded, and then we have to unregister that route in the Gorouter. So what kind of information are we looking for? Well, we use the pod IPs to actually send that registration message to the Gorouter. And it's important for the Gorouter to have access to that network, which is private in Kubernetes. Right now we do that by running the Gorouter directly on Kubernetes, but there are other ways to do it as well. So let's see here. I'm going to do a watch on Dora's host endpoint, which I made, and this will just print out which pod index it's coming from. So we can see that we're hitting both instances. And if I do a cf scale on dora to three instances, this should actually include the new instance when it's ready. That wasn't the right timing. So now you can see that we're hitting instances one, two, and zero. So how do we actually get the route information, though? Well, we're storing it in the StatefulSet. When you create your app, that information is stored in the annotations of the StatefulSet. So if I do a kubectl describe pod on dora — actually, I want the StatefulSet. So if I do describe statefulset — and this is a different hash — there it is. We can see that the annotations have a routes field, and we can see our route in there. So if I do a cf create-route with a new hostname, summit, and I change the watch here to that new route, just so we can see that it works... OK, we can see a 404.
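The route emitter's reaction to pod events can be sketched like this. The NATS subjects `router.register` and `router.unregister` are the Gorouter's real registration conventions; the event shape, port, and helper name are illustrative assumptions.

```python
# Illustrative sketch of the route emitter: translate informer-style pod
# add/delete events into Gorouter register/unregister messages. The message
# fields here are simplified; only the subjects are real Gorouter conventions.

def on_pod_event(event_type, pod, routes):
    """Turn a pod lifecycle event into a (subject, message) routing update."""
    message = {
        "host": pod["status"]["podIP"],  # Gorouter needs the pod's private IP
        "port": 8080,                    # assumed app port
        "uris": routes,                  # routes read from the StatefulSet
    }                                    # annotation
    subject = "router.register" if event_type == "ADDED" else "router.unregister"
    return subject, message

subject, msg = on_pod_event(
    "ADDED",
    {"status": {"podIP": "10.0.1.5"}},
    ["dora.example.com"],
)
```

This is also why the Gorouter needs reachability into the pod network: the registration messages carry raw pod IPs, which only resolve inside the cluster unless the Gorouter runs there too.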
So if I do cf map-route to dora, we're going to see that almost immediately we get the same output. And if I do kubectl describe on that StatefulSet again, we can see that it has added the new route to that annotation. Great. So the next bit is crash events. We do crash events in the same way: when your app crashes, that sends an event to the event emitter in Eirini, which tells the Cloud Controller, OK, this instance has crashed. I think we have enough time to try this out, so I'm going to do something really stupid here. I'm going to SSH into one of the pods, I'm going to say ps aux, and I'm going to say kill -9 on PID 6. So just focus on the cf apps on the left, which hopefully will refresh soon. This will kick me out of the container, and hopefully we're going to get the crash event directly in cf apps immediately after it happens. So that's how crash events work. And the last thing that we've done is metrics. Metrics don't use the Informer API; they use what's called the Heapster service in Kubernetes. The Heapster service collects metrics for pods, so all the metrics emitter has to do is curl that service, get the metrics for our applications running in the Eirini namespace, and then send that information to Doppler. If you actually look at the output of cf app, you can see that, well, Dora isn't very busy on CPU, but we have a bunch of memory. And the weird thing is that you see this 40.1 megabytes of disk, which looks kind of wrong. And it's wrong because it's a lie: the Heapster service doesn't actually report disk, so it's just CPU and memory. We're still working on getting this to work. So I think that with this, that was how we...
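The metrics path just described (poll Heapster, reshape, forward to Doppler) can be sketched as the conversion below. The envelope field names are illustrative, not Doppler's actual protobuf schema; the hard-coded zero for disk reflects exactly the limitation mentioned above, that Heapster reports CPU and memory but not disk.

```python
# Illustrative sketch of the metrics emitter: reshape Heapster-style pod
# metrics into a ContainerMetric-like envelope for Doppler. Field names are
# simplified placeholders for the real Loggregator envelope format.

def to_container_metric(app_id, instance_index, heapster_metrics):
    """Convert one pod's Heapster metrics into a Doppler-style envelope."""
    return {
        "applicationId": app_id,
        "instanceIndex": instance_index,
        "cpuPercentage": heapster_metrics["cpu/usage_rate"],
        "memoryBytes": heapster_metrics["memory/usage"],
        "diskBytes": 0,  # Heapster does not report disk usage at all
    }

envelope = to_container_metric(
    "dora-guid", 0,
    {"cpu/usage_rate": 1, "memory/usage": 40_100_000},
)
```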