Thanks everyone, and thank you for joining me for the last talk of the day. Hopefully we still have a little bit of energy left. It's kind of funny: I'm sure the program committee scheduled things the way they did deliberately, but being the act that follows the previous talk is going to be interesting, because this is a very different take on a very similar idea. So if you were here for the last talk, you should be able to take even more away from this one because of that.

Anyway, the subject of this talk is NetReap: bridging the gap between Cilium and Nomad. My name is Dan Norris, and I'm currently the infrastructure lead at a company called Cosmonic. The agenda we'll walk through here is, first, an overall description of the problem: why did we go about doing this in the first place? I'll talk about why we ended up deciding to use Nomad for what we're doing, and of course why we chose Cilium; some of that is self-evident since we're all in the same room, but we'll get into it. Then I'll give an overview of NetReap itself, which is effectively a re-implementation of the Cilium operator, and finish with a quick demo so you can see what it looks like in action.

The core problem I'm trying to solve with this technology: I work at Cosmonic, and we're a platform as a service, actually built on WebAssembly; we'll dig into that. The main reason I wanted Cilium in our backend is that we need to be able to secure customer traffic. Running a platform as a service, which is effectively a cloud platform, is a very different set of problems and requires a very specific set of technologies, and for us Cilium is one of them.
Specifically, what my team and I wanted to do was secure network traffic on Nomad. For those who aren't familiar, Nomad is basically just another orchestrator. It's kind of like Kubernetes, but HashiCorp-flavored, so a lot of HCL. There are ways to secure workloads on Nomad out of the box: it's called Consul Connect. If you squint hard enough it's similar to something like Istio; it's a service mesh. It does a lot of L7-type things, mTLS, all of that, but it's really geared towards securing microservice-to-microservice traffic. That works if you're operating on behalf of a single company with a single set of customers. It doesn't really work if you're running customer workloads, a ton of them, and you don't want them to interact in any way, shape, or form. So for us it really came down to: how do we restrict traffic based on attributes, for services we know nothing about in advance? Again, running a platform as a service is a very different set of needs than what a lot of companies have.

To give a little background, we specifically chose Nomad because it's simpler and a lot more lightweight to run. You don't need etcd; you can deploy the whole thing as a single set of binaries, and they'll cluster themselves together if you configure them correctly, which is relatively recent. But one of the really big things for us is that it's much more flexible in the way it approaches scheduling and understanding what workloads are. Out of the box you can obviously do Docker, which is what maybe 90% of people do, but you can also schedule Java JARs natively, or run QEMU virtual machines.
You can even run arbitrary binaries that happen to be on the host system with fork/exec. The really attractive part for us was that Nomad makes it easy to write what they call a task driver. A quick bit of Nomad parlance: a task is a unit of execution, so it might be, say, a container. A task driver is what tells Nomad how to provide the isolation you need for a given workload and how to do the setup and teardown for a task. Super simple. If you've ever tried to do that with Kubernetes, you know it's not really designed for that in the slightest; there are projects that help, but with Nomad it's really easy.

As I've mentioned a couple of times, I work for a company called Cosmonic. We're a platform as a service for building distributed applications using WebAssembly. I actually spent most of my day at Wasm Day for that reason, which is why I was a little hard to track down. We run a fairly unique platform, WebAssembly on the backend, and almost the entire product we've built, and what we host on behalf of customers, is built using a CNCF project called wasmCloud, which is close to incubating. It's pretty cool. But we also use Firecracker virtual machines to be even more paranoid about it. This is not a WebAssembly talk; isolation is kind of the whole premise of WebAssembly, but there's a reason we're also doing it with Firecracker, and it's mostly paranoia. We want to make sure our customers are as safe as possible and don't end up trampling over each other, or interfering with us or with other customer code. We also want to ensure complete file system and network isolation. So, to give you a diagram of what our backend looks like: we've got Nomad as the orchestrator.
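To make that job/task vocabulary concrete, here's a minimal sketch of what a Nomad job looks like in HCL. The names ("cache", "redis") are purely illustrative, not anything from our platform:

```hcl
# A job is the top-level unit that Nomad schedules.
job "cache" {
  datacenters = ["dc1"]

  # A group is a set of tasks that are always placed together on one client.
  group "cache" {
    # A task is the unit of execution; the driver decides how it actually runs.
    task "redis" {
      driver = "docker" # could also be "java", "qemu", "raw_exec", or a custom driver

      config {
        image = "redis:7"
      }
    }
  }
}
```

Swapping the `driver` line is what lets the same job structure run a container, a JAR, a QEMU VM, or, with a custom task driver, a Firecracker microVM.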
It sits there behaving much like a Kubernetes cluster would, scheduling jobs and tasks, but those tasks happen to be Firecracker virtual machines wrapping our customer code, which is delivered as WebAssembly, in this case wasmCloud actors and providers. That's the stuff we run on their behalf. A lot of this is really well isolated, at least at the compute level. There's no real tampering: in theory, and probably in practice, it's pretty hard to break out of a KVM virtual machine, which is what Firecracker is, and it's even harder to break out of the WebAssembly boundary, given how strict that is and some of the guarantees we get with wasmCloud.

But if you notice, there's this big cloud on the right-hand side of the diagram, just floating out there in the wind. Nomad does support CNI, but what's not really covered in this diagram is any of the ingress traffic. How do you actually manage that when you're running customer code that can make arbitrary network requests? Obviously the answer is Cilium; I'm sure you're all really shocked, being in this room. There are a number of reasons we chose Cilium, and I'll get into them in a bit; you've all drunk the Kool-Aid, so you can probably guess. But there are a number of challenges when you actually try to run Cilium on Nomad. For one, there's not a lot of documentation on how to even use the CNI plugin in the first place. There are very few guides, I think almost none, that just work out of the box. Obviously the protocol is there, but in terms of guides on how to use it: nothing. The other problem is actually on the Cilium side.
A long time ago, Cilium made what is probably the correct choice to pretty much only interoperate with Kubernetes, for the most part. You can still run all of the pieces, and I'll tell you how we do it in a bit, but there was one thing we really struggled with and effectively had to implement ourselves, and that's the Cilium operator. The operator is what runs your cluster and does a lot of the bookkeeping behind the scenes: making sure you don't run out of IPs, handling your endpoint labeling, and distributing your network policies. If you want to replicate that, and I think we saw a little bit of this in the last talk, you have to interact with the lower-level APIs to be able to use these features.

The other challenge is on the Nomad side. Typically in Kubernetes, when you're running a CNI plugin, it's nice: you can run it as a Deployment or a DaemonSet (probably a DaemonSet, to be fair), things just run, and you manage it like any other workload in Kubernetes. That is not the case in Nomad, which is really annoying. When Nomad starts up, it undergoes a process called fingerprinting, where it generates a static list of all the things it can do. It coalesces all the task drivers it has available, figures out what OS it's running on, and, relevant to us, figures out what CNI plugins are available to it, and you cannot change that at runtime. So the plugin has to be available up front; you can't manage it natively in the cluster. It was a huge pain. What we did was effectively pick apart the DaemonSet that runs the Cilium agents in a typical Kubernetes deployment; we literally just looked at what was in there.
There were also some clues in the open source community from people who were trying to run Cilium on Nomad, and we ended up implementing a big systemd unit that just runs the agent as a Docker container. We put that on our bare metal hosts and it just works. I won't go into the details because there are a lot of them, but it's pretty scary: all sorts of capabilities being added, a bunch of arguments. If you've worked with Cilium at this level, it's probably nothing surprising, but it took a little work to figure out, and it's all stuff that's normally taken care of for you, which is typically pretty nice, in a Kubernetes cluster.

Another thing we had to deal with, and this affected us specifically because we run Firecracker, but I wanted to bring it up since this is a pretty network-centric audience: Firecracker can do CNI, but at the end of the day it actually wants a tap device. Firecracker is an AWS open source project, and they have a CNI plugin you can jam at the bottom of your CNI config that basically redirects everything to a tap device. And surprisingly, that all just worked. I was shocked; I had no idea it was going to be that easy. So just in case you were wondering: you can run Cilium through a tap device. Who knew?

So, our solution, if you couldn't tell from the title, is a thing called NetReap. I'll get into where the name came from, but it is open source; we have a project website, and you can find it on GitHub. As I mentioned, it's effectively a Nomad-specific Cilium operator. It's pretty lightweight, because we weren't going to reimplement absolutely everything.
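Going back to the tap-device trick for a moment: the plugin in question is AWS's open source tc-redirect-tap, which is chained after the main CNI plugin in the conflist. A hedged sketch of what such a chained config might look like; exact field names and versions vary, so treat this as illustrative rather than our production config:

```json
{
  "cniVersion": "0.4.0",
  "name": "cilium",
  "plugins": [
    {
      "type": "cilium-cni"
    },
    {
      "type": "tc-redirect-tap"
    }
  ]
}
```

The first plugin sets up the interface the way Cilium expects; the chained plugin then mirrors that interface's traffic onto a tap device that Firecracker can attach to.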
We originally chose the name because our primary goal was to make sure we were reaping all of our old IP allocations, and we ended up keeping the name, because, come on, did you see how metal that logo was? Why would we change that?

The main responsibilities of the binary are a few different things. Cleaning up old endpoints, so they don't stick around. Also cleaning up nodes that have been removed from the cluster; again, that's typically something the operator does for you, but we had to re-implement that behavior. What it also does, which is pretty cool, is sync all the network policies to all of the hosts where Cilium is running, so we can update policy dynamically and things just work. It's pretty nice. And it applies the metadata we want to all the various endpoints where the workloads happen to be running.

A lot of this is made possible through Cilium's Consul support, via the KV store abstraction. As we discovered, or actually as we considered up front, you can also use etcd. We chose not to because we didn't want to run yet another state store; we were already running Consul. Many other Nomad deployments typically use Consul as well, because that's how Nomad was originally intended to be used. Nowadays you can run Nomad standalone, without that requirement, but since most people use Consul, we felt it was fine to take the dependency. Mostly it's for storing Cilium state, but also for distributing the policies. A quick refresher on endpoints: a single cluster generally manages a single subnet. There's all the cluster mesh stuff, but we don't mess with that, so ours is a little closer to a stock Cilium deploy.
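As a quick aside on subnet sizing, which comes up next: a /16, which is what we allocate, covers 2^16 = 65,536 addresses, which is where the "65-ish K" figure comes from. A small stdlib Go sketch of that arithmetic, purely illustrative and not NetReap code:

```go
package main

import (
	"fmt"
	"net/netip"
)

// prefixSize returns the total number of IPv4 addresses covered by a CIDR prefix.
func prefixSize(cidr string) int {
	prefix := netip.MustParsePrefix(cidr)
	return 1 << (32 - prefix.Bits())
}

func main() {
	// The cluster hands out a /16, so roughly 65k addresses to keep an eye on.
	fmt.Println(prefixSize("172.16.0.0/16")) // prints 65536
}
```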
In our case, for example, we allocate a subnet in the 172.16.0.0/16 range. That gets you 65-ish thousand usable IP addresses, which is not a small amount, but it is something we want to keep an eye on. We would have done IPv6, and it works in Firecracker, but the SDK doesn't really support it, so it's kind of a pain; we decided IPv4 is fine for now.

So NetReap monitors all of the agents, which are effectively the kubelets of the Nomad world, coming up and down, and it removes any old allocations and any old nodes that Cilium no longer needs to keep track of. That way we don't need to worry about health checks against nodes that are never going to exist again; there's no reason to ping them. That aspect of it is leader-elected. The endpoint allocation work happens per agent (or per server), but only one instance needs to do the bookkeeping for all the nodes, so we do a quick leader election using the built-in support for that in Consul.

I think the last presentation touched on this a little, but in case you're not aware: Cilium policies are actually stored as one big JSON blob that continually gets updated. Normally the Cilium operator does all the distribution for you, but we needed to replicate that. NetReap effectively puts a watch on a single KV key, and you jam the whole big JSON policy file in there. Any time you make a change, it gets replicated to all the nodes, and your policies are updated. There are some weird issues there, but for the most part it just works. So, to demonstrate, here's a really quick example.
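The slide itself isn't captured in the transcript, but a policy of the shape described next, in the JSON array form that the agent imports, might look roughly like this. The labels and selectors here are invented for illustration, and field details can differ between Cilium versions:

```json
[
  {
    "labels": [
      { "key": "name", "value": "allow-frontend-to-backend" }
    ],
    "endpointSelector": {
      "matchLabels": { "app": "backend" }
    },
    "ingress": [
      {
        "fromEndpoints": [
          { "matchLabels": { "app": "frontend" } }
        ]
      }
    ]
  }
]
```

Note how the policy name is just another entry in the labels list, which is the one structural difference from the YAML form mentioned in a moment.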
I pulled this off the policy generator, by the way. It's a really lightweight network policy that just matches a backend to a frontend; that's really all it's doing. The JSON representation of it is almost the same; the only difference is that the name basically becomes just another label, and everything else flows from there. But imagine a particular server running a Cilium agent: if you have a ton of policies, this blob gets pretty big pretty quickly. It's also a very different experience for us because we hand-write, not just tend to, we actually hand-write all the JSON for our policies. So it's a bit different from the experience I imagine most people in this room have operating Cilium.

So, to recap: why did we go through all of this? Why did we build all this from scratch and end up using an orchestrator that relatively few people use? The reason is that, because we run this platform, we want to be able to connect customers to individual virtual machines and manage their traffic individually, and with Cilium, between Hubble and all the tooling, we get an unparalleled amount of insight into that traffic. Obviously eBPF makes it efficient; that's half the reason to run it. We also get the ability to apply individual policies per endpoint, if we want, based on tags and labels. We wanted to bring all those advantages of Cilium to Nomad, and to our platform in particular.

So, now it's demo time. Rock on. What I'm going to do is use a server that's already set up, running a single-node Nomad server that's also serving as an agent, effectively a client. First I'm going to turn on NetReap, which is this one here.
Then I'm going to deploy a very small sample job and show you what the endpoints end up looking like, plus a little bit about the config on the machine. Has anybody here actually used Nomad? I realize I should have asked that earlier. Okay, a handful of people. If you've ever used Terraform, though, the language should at least look familiar, because this is HCL. All Nomad jobs are defined in HCL. Technically they're JSON and get converted, but this is how normal people write them, if you don't love giant piles of JSON.

Most of this file is boilerplate. The only thing we really have to configure in here that's of interest to this audience is the CIDR that NetReap is managing; in this case it's just going to be 172.16.0.0/16. Then of course the image, and the other big thing is that you have to mount in the Cilium socket so NetReap can talk to the Cilium API. So I'll go ahead and apply that: a `nomad plan` on the NetReap job, then run it. There we go; it's booting up and doing its thing.

The example app we're going to run is a slight modification of the one Nomad gives you out of the box if you just run `nomad init`. In here, just to demonstrate some of the capabilities: like in Kubernetes, you can apply arbitrary key-value pairs in this meta block, and it can be repeated. In this case I'm applying, effectively, a label called cosmonic.com/app-name, and the value is CiliumCon North America 2023. I'm not going to go through all of it, but of interest here is the fact that we're running it with a CNI network called cilium; I've preconfigured the CNI config for that.
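A hedged sketch of the relevant pieces of that example job; the values are abbreviated, the meta key is the one described above (assuming Nomad accepts the slash in a quoted meta key), and the rest of the job is omitted:

```hcl
job "example" {
  group "cache" {
    # Arbitrary key/value metadata; NetReap turns these into endpoint labels.
    meta {
      "cosmonic.com/app-name" = "ciliumcon-north-america-2023"
    }

    # Ask Nomad to wire networking through the preconfigured cilium conflist.
    network {
      mode = "cni/cilium"
    }

    task "redis" {
      driver = "docker"
      config {
        image = "redis:7"
      }
    }
  }
}
```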
I can show it to you in a second, but it's basically the stock Cilium conflist you'd normally use. And that's really it. The other thing, depending on how you're keeping track of your IPs, is that you also give the job an address mode of alloc, i.e., per-allocation; that's very Nomad-specific, so I'm just going to leave it at that, and I'm happy to talk about it outside the scope of the presentation. I'll go ahead and get that started as well: `nomad plan` on example.nomad, and kick it off. It takes a little bit, because it has to configure itself and get CNI all set up. It usually does take a moment, especially if you don't give it a health check; it just sits there making sure it's making progress. There we go, now it's healthy.

So let's do a `cilium status` real quick, if I can actually type the word cilium, which I cannot; shell history to the rescue. This particular Cilium cluster is really just one node, the node it's running on, but it is connected to a Consul server, which is also running on the same node. Kubernetes is very much disabled. I'm running a pretty recent version of Cilium; I was playing around with the bandwidth manager and other things on here at one point. And WireGuard of course works, and is how we run it. You can see we have a couple of IPs used, so let's check that out. If I do a `cilium endpoint list`, like we would normally, it's a little compressed in this view, but still readable. We have the normal health-check endpoint and what's reserved for the host, and then this is our running container. You can see it pulled an IP address from the range we preconfigured.
All the labels are set up, too. NetReap itself sets up three specific ones out of the box, prefixed with the netreap source: in this case the job ID, the namespace it happens to be running in, and the task group. Nomad jobs can be composed of task groups, which are composed of tasks, so that's all metadata that's useful in the general case. You can also apply your own; those are prefixed with the nomad label, and there's that same label I had in the job definition before, CiliumCon North America 2023. So that's basically it running.

I do have a policy set; let me go get that. Oh no, scrolling doesn't really work well here. Anyway, the policy itself is not that important; I think we all know that Cilium policies work and do what they're supposed to. But I'm going to apply a new one and show you that. If I update the policy stored at this particular NetReap policy key in Consul, and... oh, it's because I moved it. There we go. That did update the key, and if I do another get, this is revision nine; it was revision eight before. NetReap, because it had a watch on that key, automatically read it, and if we had more servers joined to this cluster, it would have written that out to all of them at once. That's pretty neat. For the most part, we've reimplemented everything we needed to get Cilium, not NetReap, working out of the box.

So, future work: we've got a couple of things we'd like to do. We want to support and document some more Nomad-specific features, specifically ACL support. We'd also like to investigate what it would take to generally recommend etcd on Nomad to replace Consul. I mean, there's a Helm chart, right?
If you squint hard enough, you can convert that over into a Nomad spec. But we've had reports from people, not just us, who are running NetReap and Cilium in production on Nomad, saying that at a pretty high scale, especially with the way we're putting watches on a single key, but even with many keys, Consul tends to break down somewhat, and etcd is much more efficient. Anecdotally, from what I was told, these are people running tens of thousands of containers at once, churning through gigabits of traffic across something like tens of machines. That's pretty significant, so I think we'd want to have some concrete recommendations for people running at that scale.

Another thing I want to do is break up policies into multiple keys. That would make it behave a little more like network policies do in Kubernetes: you just define the ones you care about for a particular workload, and let NetReap take care of distributing them and making sure they end up where they need to be. Another thing that would be great is bandwidth management support. I believe that depends on a label in Kubernetes that ends up percolating through the rest of the system. I did some investigative work on that several months ago and didn't get that far, but it's something we'd definitely be interested in implementing and using ourselves.

And lastly, we'd love to contribute upstream. The reason we went this route in the first place was that it felt like a really big lift for us to write a different version of the Cilium operator and have it upstreamed in the project.
But that's not to say we wouldn't be willing to do that, or to find other ways to make this less of a Cosmonic-specific effort and more of a community effort. It is open source, so anybody can use it, and people besides us already do, but I think we'd be really interested in seeing how other people running Nomad would want to run Cilium, and in getting more interaction with the community.

I wanted to quickly shout out Taylor, who's still here, I see, for helping me write this in the first place. Also Dan Everton, who I believe is at GoDaddy; he rewrote about half of this to make it more efficient and better, which was pretty cool. We were honestly shocked that we got anybody using this besides us. And of course everyone else who's contributed; there are a few other people, internally and externally, who have helped us out.

With that, a couple of resources. If you want to talk about this more, we have a Discord that we run for our product, but it's also where our NetReap discussion happens. If you want to learn more about wasmCloud and the underlying stack, there's this QR code, which takes you to their Slack. And of course the project is on GitHub, at cosmonic/netreap, so feel free to check it out. We're mostly pretty responsive to issues; I try, right? It's hard when you're a tiny startup. I'd love to hear what other people would be interested in using this for, and contributions are more than welcome. With that, any questions?

[Audience member] Nice talk, thanks for presenting this. It's super cool to see how people can pick up Cilium, recognize what the value is, and just say: oh, well, this little piece doesn't work, so let me substitute something in.
[Dan] And I think we saw that in the previous talk as well. Yeah, basically.

[Audience member] It's awesome to see. When I was listening, the main question I had in mind was: from a Cilium community perspective, how can we facilitate this stuff? Obviously, if you and other members of the community are all interested in maintaining it together, we're certainly happy to facilitate some discussions about where it should live, and whether there are better ways to contribute, and so on. One big thing that has changed upstream in Cilium, probably over the course of the time you've been developing this, is that we've kicked off a major modularity effort in the Go structure of things like the Cilium agent, as well as the operator. Some of the early struggles you may have had, where the code doesn't even say "if Kubernetes", it just assumes Kubernetes, are changing: the upstream operator is now a lot more modular, based on this Hive cell model, which has documentation. So in terms of enabling some functionality and then hopefully extracting it, there might be ways to share more of the code, but I don't know; I haven't looked in detail. If it's interesting, we can certainly talk about it.

[Dan] Yeah, I'd love to follow up on that. For context, I think we wrote this originally against Cilium 1.11 or 1.12, so it's been a little while, and I know there's been a ton of activity in the project, so it wouldn't surprise me if some of it has changed enough to make it a little easier to integrate with. And then, potentially, there's the fact that you tend to get that scary message when you start Cilium: hey, by the way, Consul support is deprecated.
[Audience member] Yes, we were about to rip it out when you came by, and it was: yeah, I know, let's not rip it out, because people are trying to use it. I remember talking to you about that. But it continues to be a case of: if the community is happy to contribute, submit patches, and chip in when you hit issues, great. If you're looking at moving to etcd, though, that certainly re-raises the question, because the Consul path is one that a lot of the Cilium upstream community doesn't really exercise. So it's always a question of how far we can go in supporting it.

[Dan] I think it'd be interesting for us to get involved and see. At some point we'll probably have to switch, if it turns out, as other people have reported, that using watches on Consul just isn't performant enough; then etcd is probably the way we'll go, and we'll deal with it. But it would be interesting to get involved, help out there, and see who else is even trying to use it. I'm sure you don't want to maintain that code if it's just a handful of people here and there using it.

[Audience member] I think our system tests are one of the main users that actually exercise it, as far as I'm aware. It's certainly good to see other examples.

[Dan] Anything else?

[Audience member] Hello. Hi, I don't really have a question; I have a thank-you for using Nomad. I'm a Nomad engineer, so we should talk.

[Dan] We should talk, yes. You folks are always after us to open source our Firecracker driver; I've heard that at a bunch of different conferences.

[Audience member] Nice. Well, cool, we'll talk after.

[Dan] Anybody else want to talk about Nomad? Oh, you want me to repeat the question? Yeah, sure.
To repeat the question: it sounds like he's interested in implementing an operator like this himself, and wanted to know a little more about how we approached it. A lot of it was that we took a look at what the existing operator did and tried to figure out what we actually needed, because we didn't need that much. We threw out leader election, for example, and then re-implemented it once we realized we actually did need it. But a lot of our focus was specifically on making sure we had all the endpoint metadata, that we were cleaning things up, and that all the policies were distributed. Actually, let me step back: it's written in Go, so there are a couple of different goroutines doing the work. There's one piece for endpoints, one piece for nodes, and then of course the policies. That's pretty much how we broke it apart, and how we approached writing it. The code itself, at least from what I remember, is pretty straightforward if you look at the upstream operator, so I imagine you'd be able to figure it out fairly quickly, and of course you've now got a template. Feel free to ask questions or ping us in Discord or Slack, too; happy to chat about it. Yeah, of course.

[Audience member] Something I wanted to mention, and I told Andre before: it's amazing to see Cilium being extended, but beyond extending Cilium, a large part of the value of what you shared today is educational, because the low-level implementation of Cilium, whether that's etcd or the low-level API of the agent, is not well known by users. And those pieces are actually pretty simple once you get to know them; I think it would be very valuable if people knew them better.
And I think this example shows that it can be done, and that it's very valuable. And on that note, thank you very much.