Excellent. Hello everybody, come on in. We have so much to cover today; this is going to be a really fun time. Oh, do we have t-shirts, by the way? We have t-shirts. Okay, we're going to have a lot of fun today. Thanks for joining us. We're going to be talking about AI at the edge, using this really fun technology called NATS to do it. This is also going to be really interactive, so get your laptops ready. This is one of those talks where you can actually take your laptop out and nobody's going to shame you for it. Take your laptops out, take your phones out.

So before we begin, scan this QR code, or even better, just go to this URL on your laptop and fill out a quick survey. This is not just for me to collect information about you; it's actually part of the interactive demo throughout the talk. So be sure to do that: you get a chance to win a t-shirt, and you'll have a lot more fun during this talk. All right, scan it; I'll give you a few minutes, and while you do, I'll introduce myself.

Hi, my name is Jeremy. I work for Synadia. We are the maintainers of the NATS.io project, which is a CNCF project. I've worked in all kinds of industries, but now I get to talk about how cool NATS is, which is super fun; working in distributed systems is pretty awesome. And Tomasz, I'll let you introduce yourself.

Yeah, hey everybody. I'm Tomasz. I run the OSS team for Synadia, so the maintainers of the technology. Beyond that, I'm coding Go and Rust, or whatever is needed; recently that's Swift, which will come back later. In the past I've worked across different industries, spanning FinTech, telco, e-commerce, and more.

And before we get into the slides, let's check how things are going over here in this app. It looks like we have people filling things out. We have folks who work in technology, which I guess makes sense, and folks interested in things like event streaming and microservices; it's about an even split, which is great. We're going to try to address many of these things today. And we have a mix of newbies and a good chunk of people using NATS in production. All right, this is going to be a really, really fun talk. Keep these things open, because we're going to be using them throughout. Now I'll hand it over to Tomasz, who will give us a maintainer update and also talk a little about what NATS is. Let me fire this back up and hand it over to you.

Yeah. So, a NATS primer. Some of you use NATS in production and are well aware of it; some of you are not. So let's go through the basics of what NATS is. The basic building block is typical pub/sub: publish, subscribe. Then there's request-reply, and what's interesting to mention as we go through these layers is that in NATS, every layer is built on top of the previous one; request-reply is built on top of pub/sub. Then we have the persistence layer, which is JetStream: it gives you at-least-once semantics, with exactly-once available, and it is basically the persistence layer of the NATS system. It's also built on top of request-reply, so it's another abstraction.
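To make that layering concrete, here is a minimal sketch using the NATS CLI against a local server. The subject and stream names are made up for illustration; these are not commands from the talk itself:

```sh
# Start a local server with JetStream (persistence) enabled
nats-server -js &

# Core pub/sub: a subscriber and a fire-and-forget publish
nats sub "orders.>" &
nats pub orders.created "order 42"

# Request-reply, built on top of pub/sub: a responder and a requester
nats reply orders.validate "looks good" &
nats request orders.validate "order 42"

# JetStream, built on top of request-reply: persist matching subjects in a stream
nats stream add ORDERS --subjects "orders.>" --defaults
nats pub orders.created "order 43"
nats stream info ORDERS
```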
Then we have key-value, which gives you a KV store in a really nice distributed fashion, because many of the things we talk about with NATS are really about being distributed; it's again built on top of JetStream. The other thing is the object store. The object store is, as you'd expect, buckets of files and data; again, built on top of JetStream. Then the services API: this is an abstraction layer on top of core NATS, so pub/sub and request-reply, that gives you a really convenient way to interact between services, which Jeremy will show in the demo, without having to deal with low-level NATS details. A nice framework, you could say, or a library for building services on NATS.

And beyond all of that there's a lot of other stuff we could talk about, like the very robust and flexible security layer. You can find many of these features across different products, but what really makes a big difference is the ability of NATS to connect things in a really seamless and easy fashion. Let's take an example. You have some setup in the cloud: some applications, maybe a data lake or something like that. You also have a data center, for regulations or whatever reason, where you store data you cannot share in the cloud, and again some applications. You have a factory floor or other edge locations, with configuration for the machines on the floor and data collection you'd like to later sync into the cloud or the data center. And on top of that, you have the far edge: devices like cars and satellites.

And you have to connect all of that, which usually means a lot of stuff; your stack grows really rapidly. For every single piece here you need some technology: something like Redis for key-value, something like Kafka for streaming, something like RabbitMQ or MQTT for messaging, and then some service discovery, some DNS, and you know how the stack quickly grows. All of those technologies are really nice, and we have nothing against them, but with NATS you can do something really nice instead of this complex setup: a really smart, intelligent way to route the traffic and communication between everything, and to do it in a very secure and scalable way. So taking the whole picture you saw a slide before, you can put NATS in all those data centers, cars, satellites, and cloud providers: run it in the cloud provider on Kubernetes, in the data center maybe on bare metal, that's totally fine, and because it's light, you can run it in the car as well.
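Stepping back to the KV and object store layers for a moment, here's what they look like from the CLI — a minimal sketch with made-up bucket names, not commands from the talk:

```sh
# Key-value: create a bucket, write and read a key, watch for changes
nats kv add machine-config
nats kv put machine-config line1.speed "1200"
nats kv get machine-config line1.speed
nats kv watch machine-config   # streams updates; Ctrl-C to stop

# Object store: create a bucket and store a larger file
nats object add firmware
nats object put firmware ./firmware-v2.bin
nats object ls firmware
```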
And what that gives you: it connects all those pieces, all those applications, in a way where they're really independent on their own, but they can all see each other and communicate with each other, really simplifying your stack, your service discovery, and your networking. And the building blocks I mentioned come in again here: the connectivity is the unique thing, but NATS can also do things like key-value, so maybe it replaces some configuration store on the factory floor. In the data center you need a store for data, so you can use streams and keep a lot of your data in JetStream. And in the car you might have a local key-value bucket that is automatically replicated to all your other locations, thanks to NATS leaf nodes.

So what we get here really simplifies the communication and really simplifies the stack, because you can replace many of the pieces you had, for both connectivity and features, with just one thing. That makes the mental overhead for your developers much lower: the number of SDKs you have to use, the number of dependencies in your code, and especially the number of things you have to consider, is much smaller. The same applies to operations; it becomes much easier because it's just one thing to maintain, in many instances maybe, but still one thing, one knowledge base.

So this sounds like a lot, right? NATS is so many things, so it must be some heavy beast that's really hard to run and maintain. Actually, no. Some stats: it's been open source since 2011, and with all those features, everything fits in one 16-megabyte binary, which contains everything for connectivity, building clusters, super-clusters, leaf nodes, key-value, object store, and so on. On top of that, it now has a lot of official clients, twelve I think, plus a bunch of open source community clients. So it's very lightweight. And that's about it for the basics of NATS.

Now, more interesting things for those of you who are actually using NATS and came here to hear some updates, and there are quite a few. The 2.11 release is not out yet, but it will happen soon, and we have preview releases now, so you can play with it right away without any hassle. I think the most welcome feature is the tracing capability: it gives you an almost end-to-end look into how a message flows through your topology, which can span multiple geos, from one edge, through the cloud, back to another edge; using tracing, you get all of that data. Then there's batch get: consumers normally walk through a stream continuously, but with batch get you can instantly fetch a set of messages for specific sequences. Then new key-value semantics: we're expanding what key-value can do with things like counters and distributed locks, so it will be better tailored for those use cases, both server-side and client-side, with things like typed key-value. Also, as those of you who know NATS are aware, it natively supports MQTT, but it was missing Sparkplug B support; that comes with 2.11. And consumer pause is a simple feature: it lets you pause a consumer while you do operational or administrative work, without shutting anything down, because obviously you don't want to keep processing. You can pause it until a deadline and have it resume then, or resume it at will when you need it. That should make some operational work much easier for you.
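As a sketch of what that pause flow might look like from the CLI once 2.11 lands — exact command names and flags may differ in the final release, and the stream and consumer names here are made up:

```sh
# Pause a consumer for a maintenance window, then resume early if needed
nats consumer pause ORDERS order-processor 15m
# ... perform the operational work ...
nats consumer resume ORDERS order-processor
```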
The pedantic mode is mostly for our tooling around NATS, because beyond the client libraries, we also have tools for Kubernetes and Terraform for deploying and managing NATS. With all the advanced dynamic configuration of the NATS server, pedantic mode will let us keep that dynamic configuration tamed, so all the declarative configs work seamlessly.

On the client and OSS side, beyond the server, we're adding middleware to the clients, so it will be much easier for anyone who wants to contribute, or just use NATS, to extend it without altering the library or building layers on top of it. After that will come something we call orbit libraries. Right now we have this problem that if you want some advanced feature in a client, we don't always merge every proposal, because we have to keep things consistent across all the clients. With orbit, we'll be able to focus the core SDKs on performance, reliability, and stability, and put all the other extensions, with a much easier contribution path and smaller scope, into separate libraries; that's the orbit thing. It's not there yet, but it will show up soon. And last but certainly not least, we added the Swift client, which is officially supported. For now it's just core NATS, but JetStream support will be released soon, and then key-value, object store, and the rest.

So those are the updates on NATS itself. There's also a thing called NEX. NEX is a way to leverage NATS to build a workload scheduling platform; it leans on NATS heavily, is an add-on to NATS, and is something we open sourced. It lets you run workloads, with all the leverage of NATS, everywhere — well, maybe not everywhere, but everywhere a Firecracker VM runs — and you can run Linux binaries, WebAssembly, and JavaScript.

But we came here to talk about AI and NATS, so how does this work with NATS? The example I showed you before, of how NATS can simplify a lot of architectures, also applies to AI workloads, because a lot of the elements that work nicely for general application use cases work really nicely for AI, especially the ability to mirror things between the edge and the cloud and to offload heavy compute. And this diagram isn't just a theoretical graph we made up; there are companies, especially in the EV space, that are using NATS to drive their business. So with that, I'm handing over to Jeremy to show you the demo of how this can actually work.

All right, thank you so much, Tomasz. If you came in a little late: we have a demo, so go scan this code and fire up your laptop, because we're going to be having a lot of fun for the rest of this talk with all the things Tomasz just talked about. We're not just going to chat about them; we're actually going to play with them and use them today. So go ahead, fire up your laptop or your phone and get onto that website. By the way, if you filled this out and you see this screen, then you're totally good.
This is all driven by NATS, by the way. I just have a front end, and I'm using WebSockets to connect into the NATS layer to drive this whole demo. There's no back end, which is really, really fun; you can actually have NATS carry a lot of an application that way. Let me get my little checklist here, because there are so many things to cover.

As Tomasz mentioned, NATS is really just a message broker at the end of the day, but we've layered a lot of really cool technology on top of it as core building blocks. So I'm going to start from the very ground floor and then move up the stack into our use case today, which is AI at the edge. You're probably wondering what this thing over here is. This is a Jetson Orin Nano, an NVIDIA single-board computer that we're going to be using today for inference. If you're interested in doing AI, there's training and there's inference; we're doing an inference use case today, where we're all going to play with this little Jetson to do some facial and object detection. It's going to be a lot of fun.

So let's talk a little about NATS to begin with. First of all, NATS is just a little server that's really easy to fire up. I'm running one right now; I can simply say nats-server, and there we go, we've got a NATS server running. Like I said, it's a message broker, so it has request-reply semantics. And instead of just giving you SDKs, we give you a CLI so you can play with this stuff. NATS is really adaptive and flexible, and play is a huge part of experiencing it. Forget all the other technologies you've worked with where it takes an afternoon just to get a little proof of concept going; here it takes about an hour, because it's so easy to play with in the shell.

Let me bump up the font. Can everybody see that? Is that big enough? Sweet. Okay. So I can say something like nats sub hello.world, and now I'm subscribed to hello.world because I'm talking to my NATS server. Then I can say nats pub hello.world and just send "hi hi". Great, we've published a message. We could publish a thousand of them if we want, if I can type correctly, and you can see that all of those go through, which is awesome. NATS is actually really fast, and we even include a benchmarking command, which is great because a lot of people ask us, hey, how does NATS perform in this way or that way? We can't always give you the right answer, because it depends on your use case and your hardware and everything like that; NATS just does so many things. What you can do is run a benchmark. So I'm running a benchmark right now publishing 10 million messages, and you can see NATS is pretty dang fast: 6.6 million messages a second, which is nothing to shrug off. And by the way, this is core NATS; it's not doing any persistence or anything like that. We'll build up to that.
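The commands in that part of the demo look roughly like this — a sketch, not a verbatim capture of what was typed on stage, and benchmark flags vary between CLI versions:

```sh
# Subscribe, publish once, then publish a thousand messages
nats sub hello.world &
nats pub hello.world "hi hi"
nats pub hello.world "hi hi" --count 1000

# Benchmark core NATS with one publisher and 10 million messages
nats bench hello.world --pub 1 --msgs 10000000
```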
But one thing a lot of message brokers don't do, and that we usually lean on HTTP for, is request-reply. So I can say nats reply hello and tell the NATS CLI to echo whatever it gets back. Then I can say nats request hello and send "hi hi" again, and it responds with "hi hi" back to me. I'm not going to explain the turtles of how this works under the hood, but you can start imagining: hey, maybe I can replace my HTTP layer, and instead of using gRPC, I could use NATS for this. And that's exactly what we're going to do today.

Let's move into something a little more interesting. I'm going to switch from connecting here to our global network, Synadia Cloud, just so I can talk to all of you. All of you are connected to Synadia Cloud right now; you might even see a little server name at the top left, with an actual latency, a ping, an RTT. That's the server you're connected to in our global network, and you're all still able to talk to each other, and I'm able to talk to you. So I'm going to say nats context select and pick my kubecon CLI demo context so I can start seeing everybody.

Now, one really interesting thing is that we also provide a microservices library on top of NATS. Remember request-reply? Great for microservices. I can say nats micro list, and this lists out, in a long table, all of the instances. This is all of you, by the way: you've all been assigned a random string, and you're each representing the kubecon service right now, which means I can talk to all of you, or load-balance requests between all of you. This is really cool because it's so fluid and dynamic. We also have this AI detect service, and that's what's running on the box over here. I've been running it, and I ran a test right before we started, so I can get some stats: I can say nats micro stats for AI detect, and it returns the stats for me. So not only do we have service discovery, we also have lightweight monitoring and observability, and we barely had to do anything for it. The only infrastructure we're running is a NATS server; that's it. Everything else is built on top of these really cool messaging paradigms.

Let's move on to metrics collection, because especially when you're doing things like AI training and working at the edge, with everything spread all over the place, how do you collect those metrics in a really nice way? You can use NATS messaging for all of that. So I can say nats sub on, I think this is it, metrics.> — okay, cool. These are all metrics being emitted by all of you, and I was able to just tap into them because they're available on NATS. Not only that, but I can take this and put it into JetStream, an actual stream, and save that data, with all kinds of parameters for how I want to do it. It's actually very easy: I can say nats stream create metrics, set the subject to metrics.>, let me wrap that because my shell doesn't like the symbol, and take the defaults for everything else. Oops, it doesn't like that. What did I type wrong? "Unknown subjects." Thank you.
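Reconstructed, the working metrics commands are approximately these — a sketch; the real demo ran against a pre-provisioned Synadia Cloud context:

```sh
# Tap the live metrics being published by every attendee's browser
nats sub "metrics.>"

# Persist everything under metrics.> into a JetStream stream, defaults elsewhere
nats stream create metrics --subjects "metrics.>" --defaults

# Check how many messages have been captured so far
nats stream info metrics
```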
So let me type it correctly; apparently I don't know how plurals work. And now we have a new stream showing up here for metrics, and we already have a hundred and one messages; those are all metrics emitted by you. I can take this data, move it wherever I want, replicate it wherever I want; it's all very fluid and dynamic.

There are a lot of really neat things we could do with this, but I want to move toward the AI use case. Imagine for a second that we have all of your microservices distributed everywhere, and take a use case Tomasz mentioned: configuration. Configuration is, for some reason, always difficult, yet it's simple at its core; it's just that we haven't really come across a standard that works well for it. NATS KV is actually a really good way to express configuration. I can do things like live-updating certain parameters. Maybe I want to turn all of these purples into red. I can do that very easily: I'll just run my "make config change" task, which updates a key in a KV bucket whose name I no longer remember, and everybody's screens should be red now. This is cool because, for your microservices and any of your config changes, you can now design applications that are reactive, that don't need pod restarts or anything like that in order to work. I'm going to reset that config, because I don't really like red; but it's a good way to push config down to everybody. Let's see: make config reset. Are we back to purple? There we go, we're back to purple. Good; production outage averted.

Okay, there are lots of other cool things you can do. We did some service discovery, but what about load balancing? All of you are microservices responding to similar endpoints. What if I wanted to load-balance between all of you? I can, because NATS has a feature called queue groups, where you can all belong to a single group. It's also really smart about where you're geographically located and what kind of server you're connected to, so it finds the fastest responder for me while still load-balancing within the group. So I can do fun things like nats request kubecon.nickname, which prints out somebody's nickname. There we go: "problem child". Excellent, I found the best nickname right off the bat. And it will keep cycling through folks' names, which is awesome. This is representative of running N instances of a service and load-balancing between them easily; you get that out of the box too. So take your NGINX and just throw it over there.

And to show you how the whole fastest-responder thing works, I thought it would be a really fun game, and folks can win a t-shirt: a quick draw. I click a button, somebody gets a popup on their screen, and the first one to hit that button gets a t-shirt. All right, John Dewell over here has t-shirts in various sizes.
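Both the nickname lookup and the quick draw ride on the same queue-group mechanics, and they can be reproduced with nothing but the CLI. A rough sketch — subjects and payloads are illustrative, and it relies on the nats CLI's reply command joining a queue group by default, so multiple responders on one subject share the requests:

```sh
# Run several responders on the same subject; they form one queue group
nats reply kubecon.nickname "problem child" &
nats reply kubecon.nickname "go NATS!!" &
nats reply kubecon.nickname "nicolo" &

# Each request is answered by exactly one member of the group,
# and the closest/fastest responder tends to win
nats request kubecon.nickname ""
nats request kubecon.nickname ""
```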
I'm going to do three quick draws, so everybody get ready; he's got the shirts, get your trigger fingers ready to hit that button. And again, all of this is just NATS-based; I didn't build anything fancy for it, it's all plain NATS messaging. I'm sending a request, and the first one to respond is the winner. Okay, ready? One, two, three. Who's going to respond? Manu! We have you in the room right here. And it's not just because he was physically closer to me; this actually transcends space and time. You get a t-shirt; hit up John Dewell afterward with your shirt size and we'll get it over to you. Okay, we'll do two more. Ready? One, two, three. Who's "go NATS!!"? That's you? All right, awesome, congratulations; same deal, see John Dewell afterward and he'll get you a t-shirt. One more. I'm sure everybody's trying to figure out how to hack this in the console, or probably already has; we're at a tech conference, I should have expected it. Nicolo! Awesome. Congratulations, everybody; give Nicolo, "go NATS!!", and Manu a round of applause for winning.

Okay, now let's get into the AI use case, because this is actually the more fun part of the talk. I'm going to share my webcam over here, and what this does is call this box, just like we called all of you as microservices, but targeting this box specifically. It essentially sends a base64-encoded JPEG over, and the box returns bounding boxes for what it has detected. The cool thing is that because this has a dedicated AI chip, it can do that pretty fast: on my Mac Studio this algorithm is actually really slow, around two seconds per frame, while on this tiny little thing it's more like 60 frames a second, which is really, really neat. So I'm going to share my camera and, hopefully, turn it on. And you can see, if I bump this up a little, a bounding box saying I'm a person, and you can see it's chugging along a bit. The reason is the RTT latency: it's sending frames up to the cloud, the cloud sends them to the box, the box sends results back to the cloud, and the cloud sends them back to me. That's pretty inefficient, so we probably don't want to do that for most use cases. It might be fine for some, but what if we want something closer to real time?

This is where I'm going to bring in something called leaf nodes. Tomasz mentioned the whole arbitrary-topology story, and it's actually one of the more fascinating parts of NATS: the fact that I can have a system in the cloud, bring that system down to the edge, and keep them synced up. That's exactly what I'm going to do now. This use case is cool, but it's a little slow, so why don't we make it faster by running a NATS server here on my local LAN and getting very, very fast latencies. So I'm going to take the NATS server I ran earlier, shut it down, and run it with a configuration that runs the server here but hooks it into the cloud. That way I get fast latencies, but you all still get all of the fun stuff.
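A leaf node setup like that can be sketched with a small server config file; the URL and credentials path below are placeholders, not the actual values from the demo:

```sh
# Write a minimal leaf-node config: serve clients locally on 4222,
# and extend the cloud NATS system over a leaf-node connection
cat > leaf.conf <<'EOF'
port: 4222
leafnodes {
  remotes = [
    {
      url: "tls://connect.ngs.global:7422"   # placeholder cloud endpoint
      credentials: "./cloud.creds"           # placeholder credentials file
    }
  ]
}
EOF

# Run the local server; LAN clients connect here, traffic still reaches the cloud
nats-server -c leaf.conf
```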
So I'm going to run it there, and then run this detection service again, but connect it to that leaf node. We'll wait for this to spin up, and hopefully, fingers crossed, CUDA doesn't take forever to get its subsystem going. Okay, we're back up, connected into the leaf node. And then I'll reload this browser tab, so you can see, fingers crossed... there we go. You can see I'm connected to "Jeremy's laptop", and my RTT went down to one millisecond, three milliseconds, depending on what we're looking at. And if I fire up the object detection again, you'll see much faster response times. This is really the power of having locality in your communication stack: the fact that I didn't have to make really any configuration change beyond what I initially connect to is an incredibly powerful mechanism.

There are a lot of really neat things we could do here, but the last thing I want to close with, for those of you daring enough: one more quick draw, but this one's a little different. The first person to click this button gets to share their webcam, and I'll be able to view it. So I'm going to click this handoff button, and whoever clicks it first, you'll see a little webcam icon like I do, top right corner; hit it, share your webcam, and hit object detection, because we want to see your beautiful face detected by this chip. Okay, one, two, three, click it. Anybody get it? Handing the camera off to Palmo; it's your time to shine. All right, I'm going to hit view and see if we can share his webcam here. This is the jankier part of the demo; I'm not expecting it to fully work, by the way, and that's just because I'm a bad coder. It might not work on iOS devices, so that could be a problem; yeah, exactly, I didn't have time for that. But as you can maybe see, if somebody is able to share... oh, there we go!

So the thing I want to stress here is: this is all NATS, nothing but NATS. The JPEGs you're seeing streamed over, the bounding boxes, the microservices, the storage — all of it is driven by NATS. Hopefully this leaves you walking away impressed and wanting to play with NATS, because this isn't just some fun, silly demo; it's a collection of what we've seen real companies implement, and of how fast they're able to move with it. With that, I think we'll go to our closing slide, and then we'll take questions if we have time. Thank you all very, very much for taking the time to listen to us. There are tons of resources about NATS; take a picture of that slide if you like. We have screencasts, lots of docs, demos, and so on. But yeah, I think it's time for questions. We have time. Thank you very much.

So, any questions? Raise your hands; I'll get to you in the back. We just showed a lot, so just stand up and wave to us. Okay.

"Hey, I'm a systems engineer, a DevOps infrastructure guy. How hard is it to operate? We want to do stateful workloads; the data is important. Will I be able to sleep at night?"
Yeah, you will be able to sleep at night; it's pretty easy, actually. You saw how Jeremy starts the server; to start it with the persistence layer, you just pass another flag to enable JetStream, and you can of course pass a flag for where the data should be stored. Then NATS takes care of replication and high availability: you can create streams with a higher replication factor, like three or five, with three being the most common, and NATS will take care of replicating the data. The only things you have to provide for the persistence layer are a stable, reasonably fast network and decent IO and disks; NATS handles replicating across them. And it's the same single binary.

I don't want to make it sound too good to be true, because while there is a lot of operational simplicity to it, I'd say the biggest thing, especially when adopting JetStream, is that there are new concepts to learn and to keep mastering. So it's operationally simple, but there's more to learn about, and as soon as you're storing state on disk, there's always more complexity. That's the trade-off: you don't have to learn and master a dozen technologies, but conceptually you still have to be operationally aware that you're storing state on a server; things will go wrong, and you need to think about how you want to do failover. Anyone already familiar with distributed systems should be fine; the distributed-systems part is the hardest thing to handle here. And I mentioned it briefly, but it's worth repeating that NATS runs really nicely in different deployment and operation modes: we have Helm charts, it runs great on Kubernetes, and it also runs really well on bare metal or on small devices. Great question.

"Is there a dashboard or something, so I can quickly get insight into what's actually going on in my NATS server?"

Yeah, that's a great question. Tomasz, you want to talk about that a little, and I'll bring it up while we talk. Yes, there is such a product that we have; it gives you a really nice view into what's happening and lets you operate NATS in a really simplified way, and that's what Jeremy is showing now. Yeah, so we as Synadia do have a dashboard product for that. I think there might also be some open source options in the ecosystem; we don't officially provide an open source one, but there are different options here. Just to show you: here are all the connections of your microservices showing up in real time. I can see the RTTs, I can see what you're subscribed to, and I can manage streams, user credentials, and all kinds of things via Synadia Cloud, and via our self-hosted (or us-hosted) parallel to it, Synadia Platform. And it exposes a lot of APIs that you can subscribe to, to get this data in whatever format you need, so it's very easy to build something for yourself on top of it as well. This is a maintainer talk, so I'm not saying you have to use this, but it's there, it's available.

"Does this interface only work with your version of NATS, or can I connect any NATS server to it?" It will connect to any NATS server, yeah. Okay.
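To ground that operations answer, a minimal sketch of what enabling persistence and replication looks like; the store path, stream name, and the assumption of a three-node (or larger) cluster are all illustrative:

```sh
# Enable JetStream and choose where stream data lives on disk
nats-server -js --store_dir /var/lib/nats

# On a clustered deployment, ask for three replicas of a stream;
# NATS then handles replication and failover for that stream
nats stream add ORDERS --subjects "orders.>" --replicas 3 --defaults
```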
I think we're out of time, but we'll be here; come up to us and we'll try to answer all your questions. Once again, thank you very much, and see you next time.