Hello folks, my name is Stefan Prodan, I'm a Principal Engineer at Weaveworks, and I'm very happy to talk to you about Timoni, a new tool I've been developing this year. In today's talk we'll cover what Timoni is, how I got to build it, and how I made a Flux distribution with it for bootstrapping Flux-managed bare-metal clusters, among other use cases.

Okay, so let's get started. What is Timoni? This tool is a package manager solution. It's a statically built binary written in Go; you can run it on any operating system, and it has no external dependencies. Timoni relies upon three technologies. First is CUE. Everything you'll be doing with Timoni, describing app deployments, orchestrating the actual delivery on clusters, all these things are files written in a declarative way in CUE. Second, Timoni relies on the Open Container Initiative standards. More precisely, Timoni works with OCI artifacts, and this is how applications are delivered to end users and distributed to container registries as artifacts. Lastly, of course, Timoni is all about Kubernetes, and it especially relies on the Kubernetes server-side apply API, which means Timoni is very close to Flux in terms of how it interacts with the Kubernetes API. It basically uses the same engine beneath.

So why CUE? Why CUE in Timoni? I discovered CUE a couple of years ago. I was at KubeCon at the Flux CNCF booth, and a Flux user came to me and said: hey, I've given up using Kustomize and Helm with Flux, and now I'm generating all the Kubernetes YAML using this new language called CUE. He showed me a little bit of the setup he had built, and I was really excited. I went back home and tried CUE. It was quite hard at the beginning to understand it and to change my mindset around how I compose Kubernetes objects, but in the end I really loved it.
And CUE has some really neat features which work great if you want to generate any kind of config, not only Kubernetes; but especially for Kubernetes there are some nice features in CUE which make it so appealing. For example, CUE is all about type safety. Unlike Helm, which uses text templates, or Kustomize, which uses JSON patches, with CUE you can actually import the Kubernetes Deployment schema, for example, from the Kubernetes Go APIs. And when you write the deployment itself, if you make a typo in a field, if you add a field which is not in the schema, or if you put a field in the wrong position inside the schema, CUE will let you know about it before you even produce the final YAML. So in a way it makes defining Kubernetes objects safer, and once you've generated an object, you can be quite sure it's something Kubernetes accepts.

Another nice feature of CUE, though quite challenging if you come from the Helm and Kustomize world, is immutability. What that means is that once you set a field, let's say you write a deployment in CUE and inside the deployment you say replicas: 2, then later on, in other parts of your CUE module, you can't change that value anymore. If you set it to a concrete value, you can't change it later on. And that could be quite annoying at first: why wouldn't I want to change it? Well, immutability helps create sane configuration when the configuration is complex. If you are debugging a Helm chart template or some Kustomize overlay and you want to know how this replicas value got set to this particular value, you have to go through all the patching layers, or through all the Helm if-conditions, and figure out where exactly it is set.
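The schema-import idea from the type-safety discussion above can be sketched roughly like this (a hypothetical module snippet; the import path follows the upstream Kubernetes Go API packages that CUE can ingest, and the field names are illustrative):

```cue
package templates

import appsv1 "k8s.io/api/apps/v1"

// Unifying with the upstream Deployment schema means a typo such as
// "replcas", or a field placed at the wrong level, is rejected at
// build time, before any YAML is produced.
deployment: appsv1.#Deployment & {
	apiVersion: "apps/v1"
	kind:       "Deployment"
	metadata: name: "app"
	spec: replicas: 2
}
```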
With CUE, once you set a field, if you try to overwrite it or patch it, CUE will not allow you to do that and will say: hey, this field has already been set. So in a way, it makes you think differently about how you package your configuration and what things you allow your end users to change. In my opinion, this makes for a better model of distributing app configuration to end users. It makes you create a particular structure where you define all the inputs, all the things that you accept from users, and you provide good defaults. CUE also has this defaulting system in place, and you only work with these two concepts. You no longer modify something endlessly until you get to the desired state.

Okay. So who is Timoni for? Timoni is for two types of users. One type is software vendors, the people who build software and want to distribute it to their end users. This covers off-the-shelf software, open-source software made by maintainers, and also software packaged by platform engineers. Timoni allows software makers to package and distribute their app deployments. On the other side are Kubernetes users, the developers and operators who want to consume off-the-shelf or open-source software as described by its creator, and also be able to customize it and compose it in such a way that it fits the environment where they deploy it.

So what makes Timoni a package manager? What are Timoni's main features? They are all about applications: how you define an app, how you distribute it, how you compose microservices or dependencies and assemble your app as a whole before you deploy it. And of course, it's all about application lifecycle management: how you install it, how you upgrade it, how you run end-to-end tests, how you uninstall it, and so on. On app definition and distribution: app definition, from a Timoni perspective, is all around CUE and CUE modules.
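The immutability and defaulting behavior described earlier can be sketched in plain CUE (an illustrative snippet, not taken from a real module):

```cue
// Defaults are marked with *; a user may replace the default once
// with a concrete value.
#Config: {
	replicas: *2 | int & >0
}

config: #Config & {replicas: 3} // fine: overrides the default

// But two different concrete values can never unify. Uncommenting
// the line below makes CUE fail with a conflict error instead of
// silently letting a later layer win, as Helm or Kustomize would:
// config: {replicas: 4}
```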
So a Timoni module, which is the equivalent of a Helm chart, is an opinionated CUE module with a particular structure. If you look here, I'm guessing you'll find the structure quite familiar. It has a templates directory, where you define all your Kubernetes objects, and it has a values.cue file, not a values.yaml, where you define the defaults and so on. This is how a module looks. Timoni offers commands for working with modules: how you create one from scratch, how you verify it, how you test it on your own cluster before you distribute it. And it has nice things like applying a module in dry-run mode to see how the changes would be reflected on the cluster, and so on. When it comes to distribution, as I said, Timoni works with container registries. So, similar to Docker, there is a push command and a pull command, and you can also log in and log out of private registries. It's also integrated with Cosign: when you distribute a module, you can sign it keyless or with a private key.

Now, app composition and lifecycle. Here is where Timoni is very different from Helm or Kustomize. App composition in Timoni is done in a declarative way with this bundle object, which is just a CUE file where you can stack multiple instances, and those instances can use different modules. For example, let's say your application needs a cache server. You could bundle a Redis deployment with your app, or some other microservices, and all of these together, which share common values, get deployed as a single unit. Timoni can act on a bundle as if it were a single thing. Another important aspect is that when you configure your application, some configuration values you can provide inside the bundle file itself.
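A bundle stacking an app together with a Redis cache, as just described, might look along these lines (a sketch modeled on the Timoni bundle format; the module URLs and values are hypothetical):

```cue
bundle: {
	apiVersion: "v1alpha1"
	name:       "my-app"
	instances: {
		// The cache server bundled alongside the app.
		redis: {
			module: url: "oci://ghcr.io/example/modules/redis"
			namespace: "my-app"
			values: maxmemory: 512
		}
		// The app itself, wired to the Redis instance above.
		app: {
			module: url: "oci://ghcr.io/example/modules/my-app"
			namespace: "my-app"
			values: cache: redisURL: "tcp://redis:6379"
		}
	}
}
```

Timoni applies the whole bundle as a single unit, so both instances are installed, upgraded, and uninstalled together.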
But some values, like an API token, a password, or a DNS record, may live on the actual cluster, and you don't want to put secrets and other dynamic values in the bundle file where you define the app. So Timoni has a thing called runtime values, where you can query the cluster right before you deploy your app, extract those secrets or dynamic values, and use them in the configuration. This is how Timoni avoids the problem of keeping secrets in the bundle file.

So, what type of operations? Unlike Helm, Timoni does not have separate install and upgrade commands. There is a single apply command that takes the bundle file and figures out whether it has to do an install or an upgrade, whether it has to run the tests, and so on. You can also do a dry-run apply to see what your local changes would mean on the cluster, and you can see a live diff between the cluster state and the new desired state you want to move to. Bundles can also be distributed in container registries. One last thing about bundles: besides the runtime values I talked about before, there is also a feature where you can tell Timoni to deploy an app across multiple clusters and multiple environments, and inside the bundle you can customize the app configuration based on the target environment. So with a single file you can drive deployments to multiple clusters, and Timoni does this as a step-by-step process. For example, if the new configuration fails on staging, Timoni will not move on to production and deploy those changes. In this way you can do promotions with this feature, and rollbacks, and other interesting things.

Okay, so we got to the Flux AIO part. What is Flux AIO? You may be aware of what Flux is. Flux is a continuous delivery tool which allows you to implement GitOps.
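Before diving into Flux: the runtime values and multi-cluster features just described can be sketched in a Timoni runtime file like this (names and contexts are hypothetical; the shape follows the Timoni runtime format):

```cue
runtime: {
	apiVersion: "v1alpha1"
	name:       "fleet"
	// Query the cluster right before deploying and expose the secret
	// value to the bundle, so the token never lives in the bundle file.
	values: [{
		query: "k8s:v1:Secret:my-app:api-credentials"
		for: {
			"API_TOKEN": "obj.data.token"
		}
	}]
	// Targets are applied in order: if the apply fails on staging,
	// Timoni does not move on to production.
	clusters: {
		"staging": {
			group:       "staging"
			kubeContext: "kind-staging"
		}
		"production": {
			group:       "production"
			kubeContext: "kind-production"
		}
	}
}
```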
It has all these controllers: a source controller, which is specialized in fetching definitions from outside the cluster, like Git repositories or S3 buckets that contain Kubernetes YAML. There are other controllers like the kustomize controller, which knows how to apply Kustomize overlays and plain YAML on the cluster; that's how you deploy things from plain YAML. And we also have a Helm controller in there, so you can orchestrate Helm operations in a declarative way. It doesn't use the Helm CLI; this controller uses the same Helm SDK as the CLI and performs all the Helm actions on the cluster just like Helm itself.

Now, the main challenge with upstream Flux and how it gets deployed is the fact that all these controllers run in their own Kubernetes deployments; they are different pods. If you have a bare-metal cluster with no CNI on it, so all the nodes are not ready, when you try to deploy Flux with the CLI or the Helm chart, you'll see that Flux will crash-loop on the cluster because the controllers can't talk to each other, the readiness probes will fail, and so on. So, in order to allow people to deploy Flux on clusters which are not quite ready, I have created this Timoni module called Flux AIO. It's actually a set of modules which basically run all the Flux controllers as a single unit inside a single pod. All the communication happens on the loopback interface, and the readiness probes are exposed on the host network, so the kubelet can actually see that Flux is running and will let it run on a cluster which is not in the ready state. There are other things you get with Flux AIO: for example, it's fine-tuned to run on edge clusters with limited resources. It has a security-first approach: it will only do HTTPS traffic outside the cluster, and no HTTP communication happens between pods, because it's a single pod.
It also works great on serverless clusters like EKS Fargate or GKE Autopilot and things like that. Let's see how we can install Flux. Like all things in Timoni, you create a bundle file, you give it to Timoni, it applies it, and that's how you create the deployment on the cluster. This is how the Flux deployment looks. You can fine-tune a bunch of things here: you can enable or disable Flux controllers, run it on the host network or not, enable multi-tenancy, set up proxies, and so on. It's very rich in customization. But after you deploy Flux, your cluster is still not in the ready state. Flux AIO also comes with a bundle for orchestrating Helm releases with Flux. So, after you deploy Flux, you can tell Timoni: hey, configure Flux to deploy some CNI, other cluster add-ons, and all of that.

And on multi-tenancy: there is a Flux tenant module for Timoni which allows you to onboard and off-board tenants, set up resource quotas for them, and set up RBAC restrictions on what they can do on their clusters. And of course, onboard the Git repositories of all these different teams. Tenants can mean many things, but let's say a tenant is a team that you don't want to give full access to the cluster, restricting them instead to one or more namespaces. With Timoni, you can create all the RBAC things that Flux needs on the cluster and then onboard, in a safe way, all the Git repositories belonging to a tenant.

Okay, that's enough talk. Let's see how this actually works. I'm going to demo Flux AIO on a cluster now. I have a cluster running here; let's look at the nodes. I have a kind cluster, and the single node that is running is not ready because it has no CNI installed. Now I'm going to deploy Flux using the Timoni module. Let's look a little bit at how the file looks: I'm doing cat bundles/flux-aio.cue.
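The bundle used in the demo is along these lines (a sketch based on the Flux AIO documentation; the values shown here are illustrative):

```cue
bundle: {
	apiVersion: "v1alpha1"
	name:       "flux-aio"
	instances: {
		flux: {
			module: url: "oci://ghcr.io/stefanprodan/modules/flux-aio"
			namespace: "flux-system"
			values: {
				// Run on the host network so the kubelet can reach the
				// readiness probes even before a CNI is installed.
				hostNetwork:     true
				securityProfile: "privileged"
			}
		}
	}
}
```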
So, this is the definition, and I can apply it with timoni bundle apply -f, giving it this definition. I'm running this now. What Timoni does is pull the definition from GitHub Container Registry, where I have published the module, and it has quickly deployed Flux: all the custom resources, the deployment, the controllers, and so on. So now I have Flux running on the cluster. It worked very fast because I preloaded the images, so we didn't have to wait for Kubernetes to download all the Flux images. If we now look at the pods in this cluster, we see that we have a pod called flux with four containers inside; these are all the Flux controllers. We also have CoreDNS, which is currently failing and not ready. But Flux is running, and it's ready for me to provision the cluster with the CNI.

So what I'm going to do now is extend my current bundle with the cluster add-ons. Let's see how this looks. We have the Flux instance that I previously created, and I've added a new instance called cilium, which uses the flux-helm-release module and sets up the Helm values and where the chart comes from. I'm telling Flux to keep Cilium up to date: every hour, Flux will check whether there is a new Cilium version and upgrade the cluster, and so on. Okay, let's apply this and see how it goes. I'm reapplying the same bundle, and what Timoni does is check: is Flux ready? Do I have this prerequisite? Everything is fine. Then it went on to the Cilium instance: it generated on the cluster a Helm repository for Cilium and a Helm release, then it told Flux to go and deploy all of that, and it waited for Flux to acknowledge that the Helm release was installed. Let's see what's happening now on the cluster. If I do a get pods, I can see that Cilium has started, and it has created the Cilium operator, the Cilium DaemonSet, and so on.
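The Cilium add-on instance from the demo looks roughly like this (a sketch; the flux-helm-release module values shown are illustrative, not a verbatim copy of the demo file):

```cue
bundle: {
	apiVersion: "v1alpha1"
	name:       "cluster-addons"
	instances: {
		cilium: {
			module: url: "oci://ghcr.io/stefanprodan/modules/flux-helm-release"
			namespace: "flux-system"
			values: {
				repository: url: "https://helm.cilium.io"
				// A wildcard version lets Flux pick up new Cilium
				// releases on its hourly reconciliation.
				chart: {
					name:    "cilium"
					version: "*"
				}
				sync: targetNamespace: "kube-system"
				helmValues: {
					operator: replicas: 1
					ipam: mode: "kubernetes"
				}
			}
		}
	}
}
```

Reapplying the bundle is idempotent: Timoni diffs the generated Helm repository and Helm release objects against the cluster and only changes what drifted.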
At some point, I should be able to see the whole cluster come to life. Yeah, so that was the quick demo. If you like what you saw, please go to timoni.sh; there is Flux AIO documentation there, and you can play with it. And I encourage you, if you try this new tool, if you try Flux AIO, please let me know: open an issue, and if you have any kind of problems, reach out to me on Slack. I'm on the CNCF Slack. Thank you very much.