We're going to kick this off. I know we're about one minute early, but we have so many things to cover. So thank you for traipsing all the way across this massive conference center to come see the talk. Today we're going to be talking about how NATS is designed to deliver AI workloads at the edge. And man, do we have a full schedule today. We have a lot going on, so I'll explain what's happening, but we're going to have a lot of fun. We're going to be giving out T-shirts, NATS T-shirts, and we're going to have a lot of fun just talking about how NATS can facilitate all of this communication and integrate AI and things like that.

We'll introduce ourselves, but before we begin, the very first thing we're going to do is take out our phones or our laptops. Preferably a laptop, because you might want to share your webcam later. Go ahead and fill out the survey. It's just a survey about NATS and where you're at with it, and it's also going to be the foundation for our demo today. We are going to be talking about NATS, but Byron and I both agree we could talk for hours and hours about NATS; really, the magic of using NATS as a project comes from actually seeing it. You've got to see it to believe it. So most of this talk is going to be demos. It's going to be highly interactive, and it's going to be a whole lot of fun.

So who the heck are we? My name is Jeremy. I work at a company called Synadia, and we are the shepherds, maintainers, and authors of the NATS project. We work on very big distributed systems problems in all kinds of industries, and we've recently been doing a whole lot at the edge. I'm also a long-time gopher, or Go programmer. I've authored and maintained a lot of popular open source libraries. And a fun fact about me: I've bounced around a bunch of different industries, moved from engineering to product, had a bit of a hiatus from engineering, and now I'm back into engineering again working on distributed systems, which is super fun. Byron, you want to introduce yourself?

Yeah, sure. Sorry, my voice is a little shoddy, so bear with me. I work at Synadia as well, focused on developer relations: education, docs, producing content, all that kind of stuff. I've been here for some time, and I was a long-time NATS user personally in a previous job. I'm a NATS project maintainer, and I co-host the NATS FM podcast; go to nats.fm if you haven't already. And we're going to start getting into some NATS 101.

Okay. Before I go into demo mode, let's quickly look at what the survey is looking like. Okay, cool. So we've got a good spread. Lots of people in technology, but other sectors too, which is good. I feel like NATS does really well in some of these sectors, like finance and anything at the edge. And it looks like we have a pretty good set of different interests: microservices, which we'll cover some of today, and certainly a lot of streaming and edge. It looks like a lot of people have heard of NATS, some are complete newbies, and quite a few are using NATS in production, which is great. We're going to make sure that this talk is suitable for everybody.
So you're going to be able to take something away whether you have no idea what NATS is and want to learn about it, or you're a NATS pro and want to learn about some really cool new things. We're pretty confident we're going to have a really good time covering some neat topics. All right, back to the slides. Mr. Byron, you want to talk to us about NATS? I would love to. OK, cool. It's kind of my job, so that's cool.

All right, so we're going to quickly go through a few motivations, observations that I would say represent the ethos of how the NATS maintainers think about the technology space today. We'll argue up why NATS, give you a little context, then transition into AI at the edge and why we're even talking about that today, and then go right into the demos.

So this is probably a hot take, maybe not to some of you; I'm not sure how many NATS users are here. In mainstream tech today, and this conference is one example of that, there are so many technologies out there, so many layers upon layers of abstractions you can pick and choose from. Every one of these things is fundamentally solving a problem. But the way the NATS maintainers think about the technology landscape is: maybe if we just change the fundamentals, maybe if we change the basis of what we're even concerned with, we don't need all these layers. So if we look at a modern open source stack today (and I love all these technologies, this is no dig on any of them), the point is that when you pick and choose while trying to build a distributed system that requires these capabilities, it's really hard to compose these things together in a consistent, coherent way. And that's not just a developer problem of needing to learn all the different APIs and SDKs; you need to architect this thing, you need to scale it, you need to operate it, and that can be very challenging.

We would posit that the fundamental issue is actually at a much lower level: we're limited to one-to-one communication styles; we have to rely on DNS (you know the old adage, it's always DNS, it's always a problem); we have different security models that are not really decentralized or per... I can't say that word right now, so I'm going to skip it. Great. And we have multiple technologies to learn and operate, and a lot of complexity.

So what we really want, and this is how the NATS maintainers and the original release of NATS around 2010, 2011 came to think about it, is that if you base your technology stack on end-to-end communication, you immediately have more options for different communication styles. We're not relying on DNS to do service discovery anymore. You connect into this distributed network and you address communication over subjects, names, and it's much simpler than needing to know the location of different things. It's a single platform to learn, operate, and architect against as a developer, and fundamentally you have a single consistent security model that can pervade no matter how far out you need to scale your system.
So this is fundamentally what NATS has been solving and aims to continue to solve and evolve over time: it's a connectivity technology for your services and your data, and regardless of the topology, regardless of the scale, you can LEGO-brick these NATS servers together across any geo, any cloud, all the way down to the edge, and you have one consistent connectivity layer to use.

A little more concretely: NATS has been open source since 2011. It's a 16-megabyte Go binary. There are official clients in various programming languages, so you can pick and choose depending on how you want to model your service; you're not limited to a single language. We support seven different operating systems and distribute releases across seven different architectures, and there's even a new one in 2.10, IBM z/OS, which is kind of cool. So if you run that, you can run the NATS server on there now. And the community is awesome. We have a very, very active Slack, so please join. There are resources at the end, but it's slack.nats.io if you haven't joined yet.

Going back to this: a lot of technologies really emphasize a single persona, a single role in your whole engineering team, the developer or the operator. NATS is great across the board, and these are a handful of the reasons people love NATS: everyone can get along, it's consistent, it's simple. That's a big advantage here.

Since this is a maintainer track talk, we want to reflect on the year so far. We had a really big NATS 2.10 release in September with a whole bunch of new features and capabilities and a lot of optimizations; we're not going to get into those details here, but there's a whole webinar on our YouTube channel if you want to check that out. We've also been working on some new client stuff and a new Helm chart that's going to make it easier to run on Kubernetes as well.

So, the title of the talk: we're talking about AI at the edge. Why are we talking about that? We're not trying to advocate that AI at the edge is a thing you should be doing or need to be doing. Based on this line here, it's inevitable that it's going to happen, and we're up here to argue that NATS is a brilliant technology to enable this use case. This was an Accenture survey of 2,100 C-suite executives across various industries and countries, and 83% is a pretty big number. We see this with our customers at Synadia, and we see it just in the wild: people are trying to use NATS and want to leverage NATS for this use case.

So at a high level, as a visual, where does NATS fit in the picture? We have various data centers, we have public clouds, we have factory floors, we have vehicles, we have some arbitrary edge device. NATS servers and NATS clients can live in all of those places, they all connect to one another, and you have that single connectivity layer and that same security model. That's what we're trying to articulate here today: how flexible and adaptable NATS actually is. A little more concretely, and Jeremy is going to touch on a lot of these things as well, you can design the topology that meets your needs for your use case.
You can again stretch the NATS topology as far and wide as you want. You have one single connectivity layer for your applications and one security model, and you have streams, persistence for storing and forwarding data, which is common for collecting telemetry and then pushing data back down to the edge. A lot of our users are also using our embedded object store to store models; once a model is stored, all of the clients at all the edge locations observe that, update their model locally, and can start serving it. So. All right. There we go.

Yeah, that was great. Fantastic. Everybody give Byron a round of applause for an awesome presentation. This is awesome stuff, and going back to that diagram, that's not just stuff we made up; this is stuff we're encountering every single day. Byron mentioned that it's inevitable that AI is going to become a competitive advantage for a lot of organizations. And if you're playing with AI, or if you're starting to look at AI at the edge, the way we've done things with cloud is not the same as edge, and you'll learn that really, really quickly. So we've been learning and adapting with our customers who are saying, hey, we need to start bringing AI out to the edge, deploying workloads and data at the edge. It's a very different set of constraints to work with compared to the cloud, and this is why we think NATS is a really good fit for it.

All right. So everybody take out your phones again. You're not going to find a lot of talks that say, hey, take out your laptops, take out your phones. We're going to be going back to this demo, so if you were late to the party, go ahead and fill out that survey, because the survey is really the gateway into some fun demo time. And if you want a free t-shirt, you'll want to sign up and make sure you go through that demo.

Okay, so let's get into demo mode. I'm going to jump into my CLI. Everybody can see that? Maybe bump it up one little tick. Okay. NATS is very fun to show off, and I prefer the show-don't-tell model, so I'm just going to jump into my terminal and start showing you what NATS is. Byron mentioned that the NATS server is really small, operationally simple, and easy to run, and I can prove that to you right now: you just run nats-server, and I have a NATS server running. So I have a local NATS server running on my laptop, and I can start doing all kinds of things. I'm going to say nats context select default just to select my local default NATS server, and we'll switch back to the cloud in just a second.

The basis of NATS, under the hood, is a pub/sub model, and we layer a bunch of other things on top of that to make it a fully fledged communication layer. I can simply say nats sub hello.*, and that star is a wildcard. Everything in NATS is communicated over these subject-based addresses, and they're all token-based, so you can do fun stuff like wildcards. So I can subscribe to this subject right here, and then I can easily publish to it with nats pub.
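As an aside for anyone who would rather see application code than CLI output, this is roughly what that subscribe-and-publish flow looks like from Go using the official nats.go client. It's a minimal sketch, not the demo code; the subject and payload just mirror what I'm typing in the terminal.

```go
package main

import (
	"fmt"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	// Connect to a local NATS server (nats://127.0.0.1:4222 by default).
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		panic(err)
	}
	defer nc.Drain()

	// Subscribe with a wildcard: "hello.*" matches hello.jeremy, hello.byron, etc.
	if _, err := nc.Subscribe("hello.*", func(m *nats.Msg) {
		fmt.Printf("received on %s: %s\n", m.Subject, string(m.Data))
	}); err != nil {
		panic(err)
	}

	// Publish to a concrete subject; NATS is payload-agnostic, this is just a string.
	if err := nc.Publish("hello.jeremy", []byte("hi hi")); err != nil {
		panic(err)
	}

	// Give the async subscription a moment to print before exiting.
	time.Sleep(100 * time.Millisecond)
}
```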
By the way, everything I'm typing in the terminal is the NATS CLI. It's basically a fully fledged client, built in Go, and it can do everything any of the other clients can do. It's really handy because you don't have to build your application and then wonder, oh, is it going to work with NATS? Instead you just use the NATS CLI to make sure your deployment is working, your NATS server is working, things like that.

So I'm going to say nats pub hello.jeremy and pass in a payload. Now, I'm sending a message on a certain subject, and NATS is payload-agnostic, meaning I could send anything: Protobuf, JSON, BSON, Cap'n Proto, whatever. I'm just going to send a string saying hi hi, and you can see that on the other side we receive it. And I can also just send a bunch of these. NATS is actually really fast, and one of the cool things is the NATS CLI gives us a bench command. So I can say nats bench, and I have a history of this, there we go: one publisher, one subscriber, publishing hello world a bunch of times, 10 million messages, and we're clocking in at about 5.2 million messages a second. That's pretty fast, so you don't really have to worry too much about performance in most use cases with NATS.

So you can do pub/sub, but you can also do things like request/reply. I can say nats reply, call it echo, and make sure it echoes back whatever I send it. Then I can say nats request echo with hello world, and it replies back to me with hello world. So yeah, you don't need HTTP, you don't need gRPC anymore; you can just use NATS for all of this. The other cool part is I can load balance between responders. I can fire up another replier for echo and start sending more requests, and it will load balance between them. I can say, you know, count to a thousand, these things will all count up, and you can see that it randomly distributes the load. These repliers can come up and go down as much as you want, so you can throw your load balancers out the window as well and just let NATS handle all of that.

OK, cool. So we have request/reply, we have pub/sub, and there are a couple of other really cool things we could do. But first, I'm going to jump onto the cloud, because I'm not going to run just my local NATS server all the time. I'm going to say nats context select, and I have one for KubeCon somewhere; here it is. So now my NATS CLI is pointing at the cloud. On your little websites it should say which server you're connected to, at the top left, somewhere in AWS East 2 or something like that. I'm going to be doing the same thing here, and I'm going to use what we call wiretap mode: I can put in a wildcard that's just a catch-all and get everything that's going on inside that particular namespace in NATS. So I'm going to say nats sub with this special little wildcard over here, and you can see I'm getting all kinds of stuff. All of your clients are emitting some metrics, just dummy data that you're emitting.
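Going back to that request/reply and load-balancing piece for a second: from a Go client it's just a queue subscription, and every replier that joins the same queue group automatically shares the load. Again, this is a minimal sketch with nats.go, with illustrative subject names and payloads rather than the demo's actual code.

```go
package main

import (
	"fmt"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		panic(err)
	}
	defer nc.Drain()

	// Start two "echo" responders in the same queue group. NATS delivers each
	// request to only one member of the group, which is the load balancing
	// from the demo -- responders can come and go freely.
	for i := 1; i <= 2; i++ {
		id := i
		if _, err := nc.QueueSubscribe("echo", "echo-workers", func(m *nats.Msg) {
			m.Respond([]byte(fmt.Sprintf("worker %d says: %s", id, string(m.Data))))
		}); err != nil {
			panic(err)
		}
	}

	// Fire a few requests; replies are spread across the two workers.
	for i := 0; i < 5; i++ {
		reply, err := nc.Request("echo", []byte("hello world"), time.Second)
		if err != nil {
			panic(err)
		}
		fmt.Println(string(reply.Data))
	}
}
```

The nats reply command behaves the same way: repliers share a queue group by default, which is why firing up a second one in the demo immediately starts splitting the requests.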
Anyway, back to the wiretap: you can see I'm getting all of this in real time, which is really, really fun. Now, one thing we haven't talked about is that everything I've shown you so far is fire-and-forget publish and subscribe. There's no persistence, right? But NATS has a subsystem called JetStream, which is all about persistence, and one of the neat things about NATS is that you can mix and match fire-and-forget with a persistence layer. You can start saving things into a stream if you want. We can't cover it all today because JetStream is immense; it supports all kinds of different storage patterns. But one of the neat things is that it interacts with core NATS just fine. You're all emitting something on this metrics subject right here, so what I'm going to do is switch over to our platform, Synadia Cloud, which is a UI for NATS and also a huge, globally distributed NATS super cluster. I'm going to create a new stream called metrics, tell it to ingest everything from the metrics subject and anything after it, and hit save. And that thing starts getting messages right away: the metrics stream already has 124 messages in it, and I can start replaying those however I want. We have multiple replay and consumer models for it, but the point is I can just start storing data as it flows in, which is really neat. Again, there's a whole lot to JetStream that we don't have time to cover; we just wanted to tease you.

Okay, cool, let's move on, because we have so many cool things to cover, especially with AI at the edge. So what do I have right here? This is the Orin Nano. It's part of the Jetson family from NVIDIA, a little single-board computer; think of it like a Raspberry Pi, but for AI, with CUDA cores and so on. It's really nice because we can run all kinds of cool AI-based applications. Today we're going to do some object detection: I'm going to turn on my webcam and, you know, I'm a person and this is a bottle, and things like that. So let's fire that up and see where this takes us with NATS.

The first thing I want to show is that NATS has first-class support for services, or microservices, via our services API. What that means is that when a NATS client connects, you can have it act as a service with multiple endpoints, documentation, and so on, so you get this free service discovery out of the box. And the cool part is there's first-class support for it in the NATS CLI: I can say nats micro list, and we get a lot of things back because we have so many people connected here. Let me scroll up and even this out a little bit. Okay, so we have this ai-detect service: object detection for images using Darknet and YOLO. YOLO is "you only look once", an object detection algorithm that's really fast because it only looks over the image one time.
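The detection service in the demo sits on top of that same services API. Here's a rough sketch of what registering a service like it could look like with the nats.go micro package; the name, description, endpoint, and response payload are placeholders standing in for the real demo code.

```go
package main

import (
	"github.com/nats-io/nats.go"
	"github.com/nats-io/nats.go/micro"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		panic(err)
	}
	defer nc.Drain()

	// Register a service; it announces itself so `nats micro list` can discover it.
	svc, err := micro.AddService(nc, micro.Config{
		Name:        "ai-detect",
		Version:     "0.0.1",
		Description: "Object detection for images using Darknet and YOLO",
	})
	if err != nil {
		panic(err)
	}

	// Each endpoint is just a subject; any NATS client can send it an image
	// with a request and get detection results back as the reply.
	if err := svc.AddEndpoint("detect", micro.HandlerFunc(func(req micro.Request) {
		frame := req.Data() // raw image bytes from the requester
		_ = frame           // run inference here...
		req.Respond([]byte(`{"objects":["person","bottle"]}`))
	})); err != nil {
		panic(err)
	}

	select {} // keep serving
}
```

Because the service announces itself over NATS, nats micro list and nats micro info can show its endpoints and stats without any separate registry.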
And that detect service is actually running right here on this thing, connected to my computer via LAN. I'm doing internet sharing because I can't have a router here on stage for y'all, but we'll make do with what we have. So this device is running our algorithm as a microservice, meaning any NATS client can just call it: send it an image and get object detection data back. And that's exactly what we're going to do today. All of these other services that you see are you. This KubeCon service is a KubeCon attendee service, and every single one of those browser sessions you have open to that website is actually creating a little service, and I can call out to those as well.

So let's do that. I'm going to say nats request and start sending requests. Remember, all of these things can be load balanced between each other, so I'm going to do a load-balanced request to the kubecon.nickname endpoint, which all of you are hosting, and we'll see who's going to be the lottery winner. Let me pass in a payload... and Jeremy's the first winner, John Noel is the second winner, and Adam is the third. Okay, we won't give t-shirts for that because you didn't really have to try very hard, but you can see how we're load balancing between folks. I could keep going and more people's handles would keep showing up, which is really neat.

So let's play a little game where we do try harder, and the winners get a t-shirt. How does that sound? We're going to say nats request... something along the lines of, what was it called, quickdraw? There we go. I'm going to send a request that's only going to take one reply, and I put a 20-second timeout on it just in case people are slow on the draw. Here's what we're going to do: when I hit that button, you're going to get a pop-up on your screen, and the first person to click it is the winner. How's that sound? Get your fingers ready for tapping or clicking or whatever. Ready? Three, two, one. Okay, Krista. Where's Krista? Everybody give Krista a hand. All right, Johnna, do you want to hand Krista a shirt? Actually, bring a bunch of shirts and you can figure out what size Krista needs. Does anybody want to do a couple more of these? By the way, what this is illustrating, again, is load balancing and how NATS can take care of a lot of these things just by virtue of being a single communication layer. All right, we'll do a couple more. Three, two, one. Jan. All right, Jan. Okay, one more. Ready? One, two, three. Aguio, raise your hand. Awesome, we'll get a shirt over to you.

Okay, we don't have a ton more time, so let's move on to the AI at the edge portion. I think we've shown off enough of all this; let's jump into the demo and show how we can do some object detection. You can see that pretty well, all right. I'm connected to the cloud just like everybody else is right now, and I'm going to share my camera so you can see my mug over here. Hey, hey, hey.
And then I'm going to turn on the object detection, and you can see, hopefully, yeah, it's identified me as a person. You might have to look a little closely. It will identify Byron as well; let me bump this up so we can see it. It's a little finicky when it switches between us, but that's just how I wrote it; I'm a bad programmer. It will also detect a bottle and things like that. And this is a really cool implementation of AI at the edge, because there are so many scenarios like this: traffic cameras, anything that needs to do some sort of computer vision and identification. But here's the really neat thing. If you were snooping you might have noticed already, but go click that little eyeball at the top right of your screen. Just like I was snooping on all of your metrics, you're now snooping on not only the object detection data but also all of the frames that I'm sending. And this is one of the really cool things about NATS: sure, there's a security model so you can tie all of this together and lock it all down, but you can also intercept messages and use that for logging, for ingest into your streams, for all kinds of stuff. And again, I'm on the cloud, you're on the cloud, and we're sending all of this together.

So we're going to do something a little more complicated before we get into Q&A; we had so many things to cover today. Now, this is a little laggy on my end. It's okay, but if we were doing traffic cameras and things were going 80 miles an hour on the toll roads, this would not work well, right? One of the things that's really important for the edge is the idea of running things locally: being able to make decisions locally, being able to synthesize data locally. If we're tromboning back and forth to the cloud, even if we're close and connecting to a node here in Chicago with, say, 30 milliseconds of RTT, which is pretty good, it's still too slow. So, as Byron mentioned, NATS supports these arbitrary topologies that you can LEGO-brick together, and one of those primitives is what we call a leaf node. That basically means I can run a NATS server wherever I want, have clients connect to it over a LAN, and it behaves like part of the larger system, except you get all of the performance of things working over a LAN. You don't have to change any of the config of your applications; you just point them at a different server and everything magically works. It figures out where to route everything.

So what I'm going to show today is this: I'm going to quit this AI program and say make run leaf. That's going to run against a leaf node that I actually need to boot up first. So I'm going to take the NATS server I fired up earlier and pass it a config, leaf.conf. What this does is run a NATS server that creates a connection into the cloud, and that way it bridges the two systems together.
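For the curious, a leaf node config like that leaf.conf is tiny. This is only a sketch of the general shape; the listen port, the remote URL, and the credentials path are placeholders, not the demo's actual values.

```
# leaf.conf -- run with: nats-server -c leaf.conf
port: 4222            # local clients on the LAN connect here as usual

leafnodes {
  remotes = [
    {
      # Outbound connection from the edge server into the central/cloud NATS system.
      url: "tls://connect.example.com:7422"
      credentials: "/etc/nats/edge.creds"
    }
  ]
}

jetstream {
  store_dir: "/var/lib/nats"   # local persistence for store-and-forward at the edge
}
```

Local clients keep connecting to the local port exactly as before; the leafnodes block is what dials out to the central system and bridges the two.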
So now I have a local system running, and that connection to the cloud can be severed and everything connected to the leaf node still operates as normal, right? We can even replicate data, which is exactly what we're doing here: we can take data stores and say, hey, replicate it here, or store and forward it, or anything like that. So we have this leaf node running on my laptop, and since this is all working over a LAN of sorts, we'll get the device connected over to the leaf node. I'm going to say make run leaf, and you can see that it connected to 192.168.3.1 and now it's running. Now, the cool part about all of this: I'm going to switch over to my local connection, just because of TLS being annoying, and refresh this page, and then run this object detection model again, if I can. There we go, and it's actually a lot faster and more responsive. It's running at 60 frames a second, and that's because I'm dealing with one millisecond of RTT instead of 30 milliseconds. So this is now a much better edge implementation, because everything is running at the edge: if the internet goes down it keeps running, which is awesome, and I'm getting much better response times. And this is where NATS really comes into play.

Oh, by the way, let me give you a little surprise. Even though this is all running locally here, you can still hit that view button and you should still get all of that data, because all of it is being forwarded to the cloud for you. And that's the beauty of NATS: I didn't have to change how these applications work, I just changed which servers they were connecting to, and they figured it all out. That's the really special thing about NATS and why we think it's really great for the edge.

Okay, I'll close with one more thing: what kind of brave soul do we have who wants to share their webcam? I'm going to hit this handoff button, and the first person to accept it gets to share their webcam. All right, ready? One, two, three. Okay. I don't know who got it, but whoever claimed it, you should see a little camera icon at the top of your screen. Go ahead and click that camera button, turn on your webcam, and hit that object detection, and we're going to see if we can see you. Maybe. Did anybody see it, by the way? No? Oh, we got somebody. Okay, awesome. You get a shirt. Yeah, you definitely get a shirt; we'll give you one afterward.

So this is the power of NATS, especially at the edge, but it works for cloud, it works across clouds, it works across geos. We think it's a fantastic technology for this stuff. I wish I could talk about it all day, but I think the demo speaks for itself, and we are now ready to take questions. Any questions? There's a mic in the middle. If you have a question, feel free to jump up to that mic, and if we need to form a line, that's fine too. If you want to shout, we'll repeat the question. Did you like the demo? Nice. Okay, we've got some folks coming up for questions.

Do you have plans for adding schema validation, or any other sort of specific message formats that you want to keep with topics? Is that planned functionality?

Yeah, so schema validation is something we've talked about for a while internally at Synadia, and we've even built a couple of internal demos around it.
It's something that, if we do officially support it in some way, shape, or form, we want to get right, because there's a huge divergence between a small team wanting some durability versus a very large organization wanting governance, and we want to make sure we get that Goldilocks balance right.

Yeah, and on that point: likely in a future release, and I'm not promising 2.11, the next minor release, there's some intention to provide in the NATS server, formally, basically a call-out mechanism on ingest of a message, where you can write your own thing and attach it in that hot path. And that's the extent of it. So again, to Jeremy's point, it depends on what you want to do, how complex the model should be, and how much governance you need to incorporate. But one of the things we landed in 2.10 that people had been asking about for a long time is auth callout, which is an extension point. It doesn't give you all of these IdP integrations out of the box, because that varies greatly, but this extension model is very interesting going forward for the NATS server. So we're constantly considering these particular use cases where you could say, hey, I want to implement this thing, I'm choosing to use this extension, and then you get that functionality.

Does that fit into the idea of services that you're introducing, or are you thinking of those as separate? So the services API would be more of the framework you might use to implement that extension point, and then you get the observability and all of that. Yep. Well, thank you.

Yeah, two questions. First of all, can we run this demo? Is it open source? It can be. That would be awesome. Let me clean it up and publish it; it's pretty easy to run. Second, I've heard people talk about routing. For a pub/sub system, and I understand this is more than just pub/sub, how does routing work within the NATS server? Maybe that's too complicated to talk about here.

Yeah, a quick overview: the routing mechanisms in NATS are mostly built around the fact that it's a highly distributed system, meaning it can cross multiple geos; we can have not only clusters but super clusters of those. So there's a bunch of logic built into the NATS server around routing. For instance, if you have a microservice somewhere, how do we know that when you make a request we can bring it to the closest responder? There's a lot that goes on in the routing there. I showed that load-balancing demo: how do we find the closest one? One of the principles of NATS is location transparency, meaning an application doesn't need to know where the thing it's talking to lives; it just needs to know what it wants to talk about, and NATS handles the optimization path for you via routing.

And extending that a little: internally, when you form a NATS cluster, there's also the notion of a super cluster, which essentially bridges two clusters together, and you can keep doing that, with leaf nodes as further extension points. Internally there's what's called an interest graph that gets constructed and propagated.
The servers gossip that across the whole cluster: a client connects to one server and shows interest via a subscription, that server gossips the information across the entire cluster, and then any server with publishers connected on that subject knows, oh, there's a client over here I need to deliver to. In a very simplistic view, in the demo we showed, when I switched over to a leaf node all of the traffic was happening inside that leaf node, but you all expressed interest, you showed interest in the thumbnails and the object detection, and so the leaf node forwarded all of that information to the cloud. All of that happens implicitly; that's sort of the magic there, which is quite nice. All right, let's take a couple more questions, because I know we have one more minute.

Hi, I wanted to ask a little about the persistence modes that are supported here, because you mentioned that you have a specific leaf-type deployment, a leaf mode for a given node, so I'm guessing you also have configurable persistence based on the role of the server?

Yeah, the high-level overview, because I know we only have a minute, is that JetStream is the persistence model built on top of NATS, and it's extremely flexible, to the point where we have plenty of customers replacing Kafka, replacing RabbitMQ, replacing Redis, replacing MinIO and S3, all through a single abstraction, because it's just that flexible. Key-value, object stores, data streams, logs, durable queues: all of those things can be modeled inside that single abstraction of JetStream. We have multiple ways that folks can pick things off of a stream and consume them, and that's the high-level overview. The last thing I'll say is that the huge advantage JetStream has over a lot of those other technologies is that you can take a stream and replicate it anywhere. You can move it, even mid-flight, while things are consuming from it. You can mux thousands of streams into one big stream in the cloud, or demux one big stream onto a bunch of different edge devices and store-and-forward them. It's all flexible and very much tied into the flexible topology.

So you're saying you can configure the server to do some kind of, I guess, mapping or redirection or mirroring of... Yeah, you can tell JetStream to mirror a stream somewhere. We can follow up afterwards as well. We have to stop, but thank you all for attending and for watching our crazy demo. We're going to be up front if you want to talk more; we'd love to chat. So thank you so much for attending. Appreciate it. Thank you.