I'm going to talk about development on production, which is always a big "oh no" for everyone, primarily, I believe, because you will eventually affect the users that are there. I'll start with a quote that I need to add to all of my talks: if debugging is the process of removing bugs, then programming must be the process of adding them to your system.

This is a story that starts with a project called OpenShift.io. Does anyone know about it, or remember it? Some of you know it more than others. It was essentially a project that set out to create a developer SaaS, in the sense that it was supposed to automate all the processes around a project: automatically set up build pipelines, create a project for you through a simple UI where you can essentially just say, "hey, I want a Go service that connects to these types of services," and now you have a repository, that repository automatically deploys to a certain cluster, and so on.

In the process of building that, it started out as a greenfield project, with a lot of the developers coming from a tooling background and not necessarily being experienced cloud developers. But the beginning is always nice in a project. You can start out with just one service or one application that you run on your local machine, and the world is kind of nice. You can start it, you can debug it, you can do whatever you want. As it starts to grow a bit more, you get into containers and so on. This was a platform, or software as a service, that was going to run on top of OpenShift, so eventually it would have to be some form of containers. Still, in the early days there were just a couple of containers, a few services, easy enough to run through something like Minishift or some other form of OpenShift instance on your own machine.

And then you start to run into the kind of distributed computing fun that you don't have to deal with when you have a monolith or a single service: the network isn't reliable, there is latency, the bandwidth isn't infinite, and so on. And you start to throw the developer experience out the window, because when it runs in OpenShift, or in a container platform, it's not as easy as it used to be. It used to be that one process that you could deal with, you could run the database on the side, and you were kind of fine.

So as developers, we wanted to get that speed back. We had problems getting this to run in a proper way. We were still on the local cloud that you ran on your own machine, so each developer had their own cloud instance, essentially, trying to run this whole app. The tools we were looking at then, to take back some of the control we lost when going to the container runtime, were things like Squash, to be able to debug these services as they're running in the cluster. It has nice IDE integration; you can essentially just select the process that you want to debug, and it will connect and set it up for you. Another one was Telepresence. What that does is, essentially, you tell it which deployment you want to develop locally, outside of the cluster. It then changes the deployment in the cluster, installs a proxy instead, reroutes the traffic to your local machine, and routes your local machine back out into the cluster. So it becomes similar to the local service actually being in the cluster.
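As a rough illustration of that workflow, this is approximately what swapping a deployment for a local process looked like with the Telepresence 1.x CLI; the deployment name, port, and local command here are made up for the example, not taken from the project:

```shell
# Swap the "ratings" deployment in the cluster for a proxy and run the
# service locally: cluster traffic to the deployment is routed to the
# local process, and the local process can reach in-cluster services.
telepresence --swap-deployment ratings \
             --expose 9080 \
             --run ruby ratings.rb 9080
```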
There are certain small differences, but you get somewhere closer. There are some other tools in the same space, but it was essentially Squash for debugging and Telepresence for the local experience while still having a local cluster. So you have the local experience outside of the local cluster.

And then the project grew, and it became more and more services and more and more pipelines going into the different environments. It started to become impossible to run on your own machine; your laptop essentially burns up. When you have 30, 40, 50 different services that are all going to run, and they all have some special setup somewhere, whether it's OpenShift or Minishift or whatever, you end up with a complete mess.

So to fix that, you move to a developer cloud, which is outside of your laptop. And then you have a testing cloud and a staging cloud. We've probably all been in a project that has stage and test environments and so on. The theory is that they all look the same, but they never actually do. So the chance that something deployed to the test environment still works when it comes to production is fairly slim. Or you see different errors in staging, either because the infrastructure is different or because the version of something is different. Or you have 50 different streams of services that are at different stages in the development cycle, continuously being deployed, so which version you're actually hitting in a given environment is also kind of magical. If it's deployed in development, that doesn't mean it's the same version you're going to hit in staging. So that doesn't really help; you get a lot of moving pieces. And you start to block each other, because you have to wait for that version before the other service can go all the way through to prod. Of course, you can try to deal with contracts and so on, but that's also not always going to work. And still, you don't have the local experience anymore.

It's challenging to keep the environments in sync. One thing is having the same versions of the services, but then you're missing the data as well, and trying to keep the data alike in the different environments is even more problematic. So you potentially hit something in prod because a version of a service returns a name with a dash in it for some reason, because it's data that you've never seen before in the staging environment, as an example. All these kinds of small, weird things will happen.

And of course, there's the extra cost as well. If you're going to run four different clouds, then you're probably going to take some shortcuts. So the development cloud isn't actually a cloud cloud; it's running on some bare-bones setup, so it behaves differently. And staging doesn't have the same SLA, so when a disk dies there, no one can fix it for a week or so, and you're kind of stuck. And then there's just the uncertainty. The reason you have all of these different environments is essentially to ensure that by the time you go to prod, you have verified that it runs. But if all the different environments are different, then what does it matter when it actually hits prod? You're still wondering: will it work when it actually gets there? You're not any more sure; you just know that you fixed a bunch of other issues along the way that may or may not actually influence prod.
And prod has different issues again. So does it work? Who knows?

This was the point when I left the OpenShift.io project and started to realize that there are a few more fundamental things in this cloud development model that need to be fixed first. Having all the pipelines automated is nice, but, at least personally, I think there's a whole lot of development tooling that needs to go in before that.

So we started to look for an easy way to validate that the changes you're making are not interfering with the other services and users. As I pointed out earlier, the main reason you don't develop in prod is that you're going to affect someone. But if you could develop in production without affecting anyone, then does it really matter? We wanted to interact with the real services, as opposed to another version of a service, or another version of a service with a different header set. So actually touch the real thing, as opposed to something that seems to be the real thing. And we also wanted the ability to develop as if you were developing a local service, the way you do in your favorite tools, without necessarily having to go to a cloud IDE or similar.

Based on that, we started with something that needed to run on top of OpenShift. We started looking at Telepresence, and then, on top of that, Istio. I'm sure most of you know what Istio is. Not everyone; most people. I'm not going to go too deep into it, but at a top level this is what it does: it injects proxies next to all of the services, intercepts all of the traffic between them, and gives you programmatic control over how the data flows, how authentication works, and so on, so you can control that without touching the service. We were wondering how we could use this to our advantage. It injects security controls, it gives you metrics and telemetry, but then there's the interesting part: traffic mirroring, traffic splitting, and traffic routing.

So meet Ike. Ike is the project, or rather the prototype, of the idea that's going to try to fix this. What is Ike? On one side, it's a Kubernetes operator that coordinates the configuration of the cluster for you, and on the other there's a command-line tool that initiates what you want the operator to do.

So let's have a look at what that actually looks like. We have an application here; this is the prod view. It's not a very fancy application, but it's essentially a whole stack: one service that calls two other services, which each call services of their own, for a total of five services. If we now initiate Ike, this is the command line that's being called, roughly the shape sketched below. It's essentially saying that we want to develop; this service doesn't require a local build, since it's just a Ruby service. We want to target the deployed service called ratings, version 1, with a port mapping; we want it to route based on this header; we want to watch this file; and we call this session feature-x. The service we're taking over with is fairly simple in this case; it just returns some JSON. But this is production. What happened now is that Ike set up a session for us. We asked it to find that ratings service; it inspected it and created a clone of the deployment.
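To make that narration a bit more concrete, the invocation has roughly the following shape. This is a sketch from memory: the exact flag names, the route header, and the run command may differ from the real `ike` CLI, so treat them as illustrative only:

```shell
# Clone the ratings-v1 deployment, route only requests carrying the
# session header to the clone, run the local Ruby service against it,
# and watch for local file changes. Flag names are approximate.
ike develop \
  --deployment ratings-v1 \
  --port 9080 \
  --run 'ruby ratings.rb 9080' \
  --watch \
  --route header:x-test-suite=feature-x \
  --session feature-x
```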
It also modified the VirtualService and the DestinationRule (roughly along the lines of the sketch after this section), and it set up a special URL for us. So when we hit the feature-x URL, it automatically adds the header to the calls going through the whole stack, which allows us to route just one specific part of it over to our new service. That's now the service we have taken over, the one that now shows the DevConf version instead of the original. But it's still the production version of the product page, the reviews, and the others that are being called. And as we can see here, we now have local control of that one service. We can change it to show a different color, just for effect. That's the local control. And it's all still running; the whole production environment is still in prod. There's only one specific subset now that's being taken over.

We also have the ability to join that session, so multiple developers can join in, share the same route, and take over a different set of services. Oh, I can show you the Kiali view of it: you have two subsets of the ratings service, one that's prod, which is v1, and then the new one, which is feature-x. Just a minute, is it ready? No, it's not. So there's a fun little bug. And we can see on the prod view, of course, that nothing has changed there; that still works as it did. There we go. Now we have two services that are up and running, routed in the same way, so developer one and developer two can join in on whatever change they need to make.

And then, of course, you can start up a second session as well, which creates another full route but runs a second version of that one specific service. Now there are two developer versions of that service running in prod, but they're not seeing each other, and prod isn't seeing either of them, which gives you the isolation you want. Let's add some more traffic. There, now we have two different versions of one of them. Any questions so far, by the way? Does everyone understand generally how this can be used and how it works?

And of course, I've hacked the operator a bit so it doesn't remove the actual external routes, but you can see that when we shut down the local development here, everything returns to the normal prod versions.

That's just the beginning of what we're starting to look at. Some of the next steps are making this not just a command-line tool, but something you can integrate into VS Code or Eclipse and that kind of thing as well, to make it more easily accessible. CI/CD use cases are one of the things we're going to look at first: you run a test in a CI system through a special route with the newly built image, and only the CI job can actually see it. And if the CI job stops due to a test failure, for instance, we can leave that route up so you can join it and debug the reason why it failed. There are some interesting cases there.

Of course, this is all fun and games when you have stateless things, but when you have stateful services you have a whole range of other issues. You probably don't want to change the database of a service just because you're experimenting, or affect the real users through the databases and so on. So there are interesting things to look at, like routing to alternative databases, or a form of layered database where you can write and change the data, but only you, within your own context, can read it.
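Coming back to the routing mechanics for a moment: the kind of header-based routing the operator sets up corresponds to ordinary Istio resources. Here is a minimal sketch of what that looks like; the service name, subset labels, and the `x-test-suite` header are assumptions for illustration, not the exact resources Ike generates:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: ratings
spec:
  host: ratings
  subsets:
  - name: v1            # the untouched production deployment
    labels:
      version: v1
  - name: feature-x     # the cloned deployment created for the session
    labels:
      version: feature-x
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - match:              # only requests carrying the session header...
    - headers:
        x-test-suite:
          exact: feature-x
    route:
    - destination:      # ...go to the cloned deployment
        host: ratings
        subset: feature-x
  - route:              # everyone else keeps hitting production v1
    - destination:
        host: ratings
        subset: v1
```

Note that for this to work across the whole call chain, the services in between have to forward the header on their outgoing calls; that is the part the session URL and the injected header take care of in the demo.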
Those stateful-data questions are interesting problems to solve down the line, along with whatever you could come up with that we would need to look at. The project is called istio-workspace at the moment, under the Maistra organization; we haven't come up with a better name for it than that.

Are we alone? We're definitely not. The project was influenced by something someone from Namely was showing at OSCON last year, which works similarly to this. Azure Dev Spaces and Google Cloud Code are trying to do something similar as well, at least as far as I can tell; I'm not too deeply familiar with them.

That is essentially what I wanted to show, unless anyone has more questions about how this is going to work. That went a lot quicker than I expected, so we have plenty of time left. Any questions? No? Well, then I'm done. Thank you.