I would like to give the word to Tom and after that, Malcolm.

So, hello everybody, it's just working, cool. So, like I said, I'm Tom, this is Malcolm. We're from Grafana Labs, and we're introducing you to Grafana Tanka today, an open source project we've started recently. So, who of you is using Kubernetes at their job? And who of you would say that they are really in love with YAML and that it's super cool? Okay, I basically expected that. We don't really like it that much either, because it did not completely fit our needs. Because after all, YAML is basically data, and it's laid down like data, and it's not much more clever than a sheet of paper. You can effectively express things, but they need to stay as verbose as they are, and you can't use any of these cool tricks fancy programming languages can do.

Let's take a look at some examples of why this won't work that well. So, for example, this is a deployment for just a single Grafana pod, and it's not configured or anything, it just starts the container. This is basically the equivalent of a docker run command, but instead of one line on a command line, it takes like an entire screen of things, and most of these aren't actually expressive at all. Like, we have the word Grafana four times on screen. Why? And actually, the only thing that's really unique about this here is the image, but still we need to express the entire object, which makes using it pretty slow. Another example: usually you don't deploy things a single time, but instead in a dev environment and a prod environment, and these are not exactly equivalent. So, in dev you would have the deployment, a secret to hold the connection to your data source, and a config map, and that's it. But for prod, you probably need to alter that secret, and you want users to actually be able to use the service, so you probably need a service and an ingress object as well.
But what YAML has to offer here is fairly limited, because you can't just reuse what you have for dev in prod. Even though these are the same or nearly the same, except for the secret, the only thing you can do is duplicate the entire directory. Now you have to maintain two. And if you're one of the cool people and want to be region-aware, so that your entire production doesn't fail if one data center goes down, then you need to maintain one for US, one for EU, and another for any other region. So, you already have three directories of nearly equivalent files. There might be a diff of two or three lines between these, but still you need to maintain all of them. And as soon as you want to add something to one of these, you need to propagate the changes to all of the directories, which probably will be forgotten at some point, so that you have severe drift in your configuration, which makes maintaining hard. And imagine being on call at 3 a.m. and having to fix something, but first needing to propagate all the fixes you've applied previously to other environments where that hasn't been done yet.

So, this raises the question: what can be done about it? Well, at Grafana Labs we've looked around a little bit. What we found out is that basically YAML lacks abstraction. There have been previous attempts at adding an abstraction to Kubernetes; Helm did it with templating, but templating didn't quite work out for us, because it was too distant from what actually matters. We're string-substituting on YAML, and YAML is a pretty fragile syntax; it was pretty hard to read, there was no language tooling available for it, it really did not do the thing for us. So, we looked around a bit further and found out that there is something called Jsonnet, from Google. It's basically JSON but with additions: it supports comments, it supports variables, it supports hidden fields, and it supports a pretty big set of common language features, which I will show in the next slides.
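A minimal sketch of those additions (the field names and values here are illustrative, not from the slides):

```jsonnet
// Comments, variables, and hidden fields, as mentioned above.
local replicas = 3;  // a variable

{
  // a hidden field (`::`): usable in expressions, but left out of the output
  _image:: 'grafana/grafana',

  name: 'grafana',
  image: self._image,   // reference another field in the same object
  replicas: replicas,
}
```

Evaluating this yields plain JSON with `name`, `image`, and `replicas`, while `_image` never appears in the output.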
So, probably one of the most interesting things is functions. They work basically just like you would expect a function to work: it takes arguments, and it returns any type Jsonnet is aware of. So, maybe a dictionary, an array, an integer, a boolean, whatever. This can be used to reduce that big thing we have been seeing for the deployment before to something that is nearly as concise as docker run again. You specify the name, you specify the image, you get all the rest filled in, and it just works out.

Also, another thing that makes it really somewhat superior to Helm is patching, which is natively integrated into the language. So, say we had this function signature from before, but I really, really, really wanted to add a label there, and the function doesn't allow me to. This is somewhat equivalent to something missing from values.yaml in Helm. So, what can we do about it in Jsonnet? Well, we can just formulate one of these patches, so it basically says: we want whatever this returns, plus we want the team label, and we don't want to override metadata and labels completely, but just inject that single thing in there as well. So, you can modify things deep down in the tree without affecting others, which really shows the flexibility of Jsonnet. If something is missing from the library, the library doesn't let you down completely; instead, you just formulate these patches afterwards.

Another cool thing are imports. So, one big issue of YAML is that as soon as your file gets pretty big, it's hard to keep up with it. Where is something defined? Why do I need to define things multiple times? And especially the issue we had previously with environments: yeah, I have to copy it all the time, even if it's the same thing. So, Jsonnet supports imports. Consider a file called common.libsonnet with this content, and another one that imports exactly that file and takes the labels out of there.
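Roughly what such a function and patch look like; this is a hand-rolled sketch, not the exact code from the slides:

```jsonnet
// A handwritten function that returns a Deployment object.
local deployment(name, image) = {
  apiVersion: 'apps/v1',
  kind: 'Deployment',
  metadata: { name: name, labels: { app: name } },
  spec: {
    template: {
      spec: { containers: [{ name: name, image: image }] },
    },
  },
};

// Patching: take whatever the function returns, plus inject a single
// label. `metadata+:` and `labels+:` merge into the existing objects
// instead of replacing them.
deployment('grafana', 'grafana/grafana')
+ {
  metadata+: { labels+: { team: 'my-team' } },
}
```

Without the `+` on `metadata+:` and `labels+:`, the patch would replace those objects wholesale; with it, only the `team` key is added.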
The Jsonnet runtime then practically copy-pastes the content into the correct location in the object.

Another cool thing are packages. So, Jsonnet is not only about having it all locally on your computer; there's also a package manager available, so you can actually share your Jsonnet files on the internet so that other people can make use of them. You can build proper libraries. They can be shared. Projects can build their own libraries, which can just be installed and used. And at Grafana, we have done that. For nearly everything we use in our production environment, we have jsonnet-libs, which contains, for example, memcached, along with libraries for other applications as well. So, it's basically everything. We have modules for certain applications, and we also have mixins that bundle Grafana dashboards and alerts and deploy them together. They allow you to install an application to Kubernetes which is monitored out of the box, which I think is quite cool.

So, how can that be used? Now we're going to give you a quick demo. I'm gonna talk it through so that it won't go wrong. Well, it will go wrong, but you'll see it happen. So, here we have k3d as a simple Kubernetes infrastructure. We've shown that we've got a cluster, an empty cluster with the default namespace. There are no pods in that namespace. So, what we're going to do is deploy Grafana with a sample dashboard. So, here we have the provisioning YAML that tells Grafana where to find dashboards. And now we're gonna have a look at a dashboard. This is some JSON: I created a dashboard in Grafana and I downloaded it. It's just a random snippet of JSON that we're going to need. We've got quite a lot of it, but we don't need to know exactly what's in there right now. Okay, so I'm gonna create myself a Tanka directory and enter it. It's now an empty directory. I'm now going to initialize Tanka.
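The package workflow mentioned above can be sketched like this; the install path is one real example from the jsonnet-libs repository, but treat the exact command and path as illustrative:

```jsonnet
// After installing a library with jsonnet-bundler, for example:
//   jb install github.com/grafana/jsonnet-libs/memcached
// it can be imported like any local file and used or patched further:
local memcached = import 'memcached/memcached.libsonnet';

memcached
```

The package manager (`jb`, jsonnet-bundler) vendors the library into the project, and from then on the import behaves exactly like a local file.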
What this does is go off and download a couple of libraries that make Kubernetes much, much easier to use. So, you saw deployment.new: it's downloading the libraries that provide all of that for us. Okay, so now I need to connect the default environment to my own cluster. So, I tell it, and I could make a few mistakes there, yes, to connect to the default context in Kubernetes. So, we're now set.

Now, let's have a look at what we've got there. You can see we've got an environments directory, we've got lib, and we've got a vendor directory. You can see the libraries that we're going to use. So, we're now just gonna copy those resources into the location where Tanka's gonna expect them. So, we're now ready to deploy Grafana. We start with environments/default/main.jsonnet. And the first thing we want to do is import the library that's going to bring in the Kubernetes libraries. So, we'll start by importing something: a library that's currently called ksonnet-util, kausal.libsonnet. And we add to that. So, that's standard Jsonnet syntax: we're now adding a section of JSON to that. We're taking one thing, which is the library, and then another thing, which is a chunk of JSON. And we've got some local definitions; those are just shorthands that we're going to use later.

And now, we've got a config object. The plus means it doesn't overwrite any pre-existing config, it just adds to it. And we've set just the number of replicas. And now we're specifying the image we're gonna use. Next, we create a standard Jsonnet element called grafana that's going to contain all of the resources needed to make our deployment. So, the deployment's going to involve a config map, a deployment, and a service: three standard Kubernetes resources. Here we've got a config map that's going to be called grafana-config. It's gonna have a provisioning.yaml file, which is just importing the YAML file we saw earlier. And next, we'll have the dashboard JSON.
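A rough reconstruction of what the file looks like at this stage; the library path, helper names, and file names are assumptions based on the narration, not a copy of the demo:

```jsonnet
// environments/default/main.jsonnet (sketch)
local k = import 'ksonnet-util/kausal.libsonnet';
local configMap = k.core.v1.configMap;

k + {
  // overridable settings; `+::` merges them in and keeps them hidden
  _config+:: {
    grafana: { replicas: 1 },
  },

  grafana: {
    config_map: configMap.new('grafana-config')
      + configMap.withData({
        // importstr pulls files in verbatim as strings
        'provisioning.yaml': importstr 'provisioning.yaml',
        'dashboard.json': importstr 'dashboard.json',
      }),
  },
}
```

The `importstr` keyword is what makes "just importing the YAML file we saw earlier" work: the file lands in the config map as a plain string.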
So, if any of you know about provisioning, I'm being a bit naughty here: I'm mounting them both into one directory. But it just makes the demo simpler. So, now we've got our config map set up with those two files that we need, which will be mounted into our deployment. See, I had to change that: I was going to call it just container, but because it's inside the grafana element, that would overwrite my local at the top, which is why I had to quickly edit myself there. And I'm not gonna call it first demo, I'm gonna call it Grafana too.

So, as you can see, the amount of stuff we actually need to specify about the container is relatively small. We specify the name and the image, and now we can use what's called a mixin. So, new has created the container, but we're saying: actually, add on another thing that just sets the port. So, we just want it to listen on port 3000. Now, we create ourselves a deployment which is going to make use of that container. Of course, the deployment's gonna be called grafana, and for the number of replicas, we're gonna go up and use the config value. And the neat thing about that is that another environment could set a different value for it. So, that's an easily overridable value, yeah? You can say: certain things we know we're going to override, so make them available for people to override. But we can't predict everything, so even if we haven't made something overridable, you can just adapt it afterwards by adding some extra Jsonnet, for example using a mixin, or just adding on another bit of Jsonnet.

So, there we've got the basic deployment. And we're adding this very simple thing: I mean, normally when you do volumes, you have to have a volume and a volume mount, and it's lots of repetition. Here, in one line, we just said: add the volume and mount it, please, and do the rest, and off it goes.
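The container, the deployment with the config-driven replica count, and the one-line volume mount could look roughly like this; helper names follow the ksonnet-util library as described, but the exact signatures are assumptions:

```jsonnet
local k = import 'ksonnet-util/kausal.libsonnet';
local container = k.core.v1.container;
local deployment = k.apps.v1.deployment;

{
  _config+:: { grafana: { replicas: 1 } },

  grafana+: {
    // hidden (::) container definition, mixed into the deployment below;
    // the port mixin is added on top of what `new` created
    container:: container.new('grafana', 'grafana/grafana')
      + container.withPorts([{ name: 'http', containerPort: 3000 }]),

    // replicas comes from _config, so another environment can override it
    deployment: deployment.new('grafana', $._config.grafana.replicas, [self.container])
      // one line instead of a separate volume plus volumeMount pair
      + k.util.configVolumeMount('grafana-config', '/etc/grafana/provisioning'),
  },
}
```

Because `$._config.grafana.replicas` is referenced rather than hard-coded, a prod environment only has to merge in a different `_config` to scale up.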
And here again, util.serviceFor the deployment: that creates a service for the deployment, and that's all I have to say to make my service. So, now off we go, tk show will show us. So, this is a JSON representation of the YAML, but kubectl can use JSON as well. Oh no, actually, this isn't JSON, this is YAML, sorry. And now we just apply it. So, it's shown us a big diff of everything that it's going to do. Of course, it doesn't exist yet, so it's gonna create everything. And there we go, it's created, and now off it's going, it's creating. Okay, and at this point I pause and hand back to Tom.

Okay, so, cool. What we've just seen: we've deployed Grafana, and we're currently waiting for Grafana to come up, which of course has happened in the past, so we can already see it. It's working, we have it here, that's live, I can prove it. It's pretty live, I can log in, and our dashboard is there. We've deployed using 30 lines of Jsonnet instead of a bunch of files of YAML. And the whole thing was done in about seven minutes, so that strikes me as cool.

So, to recap what you've just seen and what was really striking about it. First, we've been able to use Jsonnet to reduce this thing, this big, big, big deployment, to just four lines. And we could even have written that on one line, only then it wouldn't have fit on the presentation, but hey. So, we are back to the usability of docker run, basically. And the other thing is, even if the library authors haven't been able to include every single eventual thing you might want to do at some point, you're still covered, because you can just add some patches on top of it. So, for example, I don't need to copy my entire YAML files anymore only for prod. I could formulate a library, lib-grafana or however I want to call it, and import it. And I only need to change the ingress in this example, so I just say: it's all fine, I just need to add my prod host to the ingress, and I'm covered.
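That prod overlay idea could be sketched like this; the library name, field names, and host are all made up for illustration:

```jsonnet
// environments/prod/main.jsonnet (sketch)
local grafana = import 'lib-grafana/main.libsonnet';  // hypothetical shared library

grafana + {
  // only patch what differs in prod: the ingress host
  ingress+: {
    spec+: {
      rules: [{ host: 'grafana.example.com' }],
    },
  },
}
```

Everything else, the deployment, config map, and service, comes unchanged from the shared library, so there is no duplicated directory to keep in sync.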
So, what's to do next? If somebody thinks this is a cool thing and is interested, go to https://tanka.dev and follow our tutorial there; it really walks you through and shows how to use it in depth. And if you think it was nice, tweet at Grafana on Twitter and tell us how your experience was. One thing: this is the star history of the project from when we announced it on Hacker News. It really blew up, but now you can see it's kind of getting back to horizontal. If possible, we would like to change that again, so if you liked it, maybe give us a star on GitHub. Thank you. So, we do have some time for questions.

Hi, first of all, congratulations. So, let's assume I use Helm currently, and I saw this, and I fell in love. How does it work together with Helm?

So, at the moment there is no native Helm integration in Tanka, but you could do basically two things. If you just want to get started, you could use the helm template command to render YAML out of your Helm chart and rewrite that gradually to Jsonnet. Or you could contribute to the community: take your Helm chart, understand how it works, write a Jsonnet library for it, publish it on GitHub, and make other people happy. Any more questions?

Hey, does it do any validation or dry runs?

Go ahead. Sorry, but I can't hear you currently. Does it do dry runs or validation? Oh, dry run, sorry. Or validation. What was the question? Okay, yeah, so there is the tk diff command, which basically works like git diff. It takes the YAML and shows exactly the differences to the cluster, so it shows you exactly beforehand what will be done. You change something, and you can make sure that it actually does what you want. Any more questions? Any more? Yeah, we have one. Where's that hand? There, and Tom.

Didn't the Ksonnet team abandon the project?

Yes, Tanka is basically the spiritual... So, didn't the Ksonnet team abandon the project, was the question?
Yes, so yes, the Ksonnet project is abandoned. Tanka can be thought of as the spiritual successor to Ksonnet, and we're in the process of starting to maintain everything that's still required for our ecosystem, including the ksonnet library, which will be maintained as part of Tanka in the near future.

Why did you call it Tanka?

The language is called Jsonnet, and a sonnet is a form of ancient poetry, and a tanka is as well, or maybe it's even current poetry, I'm not sure. But we thought that tk was a nice command to type at the command line because it's short, it can be typed with two hands, so it shouldn't take a long time to type.

Does Tanka maintain local state on the developer's laptop, like in the Terraform case, for example?

So, does Tanka maintain local state? No. It generates YAML, but hands it directly to kubectl, which then talks to Kubernetes. This is probably interesting as well: Tanka directly shells out to kubectl, so it's not behaving any differently. It's just a stage before kubectl, or cube-cuddle, or whatever you wanna call it, and it just uses it, so it looks exactly the same to your Kubernetes cluster.

So, the question is: is there any configuration state stored other than what is in Kubernetes? All that Tanka does is generate Kubernetes resources in YAML, which then get shipped to Kubernetes as any other resources get shipped to Kubernetes. So, it's all before the kubectl command. It's basically a very fancy way to get YAML, instead of having to write it by hand. And kubectl includes a diff command, which is used by us: we generate the YAML and pipe it to kubectl diff, and kubectl then shows the differences.

So, last thing: if anyone is interested to talk more about Tanka, please come and find us at the Grafana stand. Thank you. Thanks.