All right, good afternoon, everybody. I'm glad you joined us for the afternoon NATS session. I would be napping right now if I wasn't giving a talk, because, I don't know, lunch does that to you. And it's like the last day of the conference. Well, thanks for coming to our talk. My name's Jeremy, and this is Todd. We'll introduce ourselves a little bit more, but we're going to be talking all about NATS. It's going to be a really interactive session, so I'm not going to just be blowing through a bunch of slides. We're actually going to play with NATS together. And hopefully, we'll get everybody connected in the room. So we'll do some quick intros. My name's Jeremy. I'm a longtime gopher, so I like to write in Go. I'm a developer, and I've worked in a ton of different industries. But I've also worked on a lot of projects in the Go community. So if there are any gophers in the crowd, you might recognize some of these projects. And currently, I'm working at a company called Synadia, where we maintain the NATS project, which is a CNCF project. There are also a lot of CNCF projects that use NATS under the hood, which is great. So I'm sure you can go to the floor and see a bunch of different projects like Cosmonic and Intuit with a lot of their projects using NATS under the hood. I recently moved back to engineering from a big product role that I was in for six years because I just missed building things. And so I'm really glad to be here and talking with you guys about NATS, because not only is it a really fun technology to talk about, it's also a really fun technology to demo. So we're going to have a lot of fun today. Todd, you want to introduce yourself? Sure, yeah, thanks to everybody for being here. So yeah, Todd Beets, longtime messaging and services and all the things in big enterprises, being an architect, high tech, retail, and with Synadia for about a year, where I get to indulge in my passion for NATS, which is really exciting. Awesome, awesome.
So friends, NATS is an adaptive technology, and we'll go more into that. But so is this talk, because I didn't know what to do for KubeCon; there are probably plenty of you that are using NATS in production. I don't want you to raise your hands. Instead, I want everybody to take out their phones, take out their laptops, get real distracted. Go to this URL right over here. I'll give you guys just like a minute to go do that. It's going to have you fill out your name. If you do not want to be somewhat featured in this presentation, then just fill out a random handle or whatever, or don't fill it out at all. But this is going to be our fun kind of audience participation part of the talk, and we'll roll through it throughout the talk. So go to kubecon22.vercel.app. I'm going to roll on over here as well. You should see a screen like this. I'm going to put my name in here, Jeremy. I'm just going to fill out a couple of questions. This is going to be kind of our roadmap for the talk. I only have 35 minutes and I have way too many slides. So we're going to pick a subset of them and go through them based on where you guys are at as an audience. So: I'm using NATS in production, I probably want to hear about microservice architectures, and obviously the most important question, who is more likely to win an arm-wrestling match, is Jeremy, let's hope. All right, cool. So we have some results coming in, which is great, and an overwhelming amount of folks who are here have never heard of NATS before, which is great. We're going to talk a lot about what NATS is and how to use it. And it actually seems like a lot of people are interested in NATS for microservices architectures, which is fantastic. We've been seeing this trend happening a lot with the companies that we've been working with that are using NATS. And there are a lot of really cool things you can do with it, some that you just can't do with most technologies today.
It looks like it's pretty close. Wow, a pretty close tie in the arm-wrestling match, but Todd is kind of inching ahead. We'll see what happens by the end of this conversation that we're having together. You guys will also see that there are some logs showing how we sent your survey information. This is all built on NATS, by the way. NATS isn't a front-end framework, of course; I'm using React and all kinds of other stuff over here. But a lot of this information flow is built on NATS. We're going to dive in and kind of check out this data that we just generated for ourselves. Before we do that, let's just talk a little bit about NATS, since we have a bunch of newbies in the room. So at Synadia, and at the core of the NATS project and its design, is this idea of rethinking connectivity. Meaning, what would the world look like if we rethought some of the core fundamentals about what it means for machines to talk to each other and what it means for users to interact with machines? And if we rethought that, what kind of new tooling and ideas would come out of it? And so I want you guys to come into this with an open mind and know that there are maybe some new types of solutions and new types of thinking that we can adopt in order to connect everything better. And so multi-cloud and edge, which has obviously been a huge topic here at this KubeCon, is really driving a massive transformation. And I want to talk about some of the ways that our current solutions don't really live up to some of the challenges that we have, because our expectations are changing of where our data should be, of where our compute should be, being closer to who needs it. And that's basically the whole theme around edge. And there's a whole ton of new problems that kind of come with that, where we're now going, okay, we have this big thing in our cloud, we've kind of figured that out or we've kind of gotten used to what that looks like. And now edge is creating some new problems.
So some of the limitations of today's technology are that we use DNS, host names, and IPs to discover machines, right? And because of that, we have kind of these one-to-one communication patterns. And for the most part, if we're using things like HTTP, we're doing this kind of pull-based request-reply semantics. Sure, we have some other technologies that are doing some streaming and some push and everything like that, but it's more layers on top of the same thing. We've also adopted this kind of perimeter-based security. Hey, I'm gonna just throw all my stuff in, like, a VPC and everything will be okay. And I think many of us who have done that have learned that there are actually new problems that come with it, especially when you're gonna go outside of the scope of your VPC, whether it's for policy reasons, whether it's for partnership reasons, or whether it just makes sense to be able to have separate networks in those cases. We've also gotten so used to location-dependent backends, right? Our typical kind of cloud approach is to say, cool, we have these horizontally scalable stateless services and then a centralized database or backend behind them. And as soon as you try to spread some of these stateless services closer to the requester, now you have the problem of latency back to the actual database or data store itself. And so these are some new problems that we have to solve when we start moving our compute. We also have to consider what it looks like for us to move and distribute our data. And again, all of this stuff is kind of built on this HTTP one-to-one communication, making the assumption that machines can only talk to each other directly, right? It's almost like we have cell phones today and I can go out on Twitter and I could say, Twitter's kind of like this town hall where I can say, hey, I'm putting something out into the world.
Whoever's interested, whoever's subscribed to me, can go see that information and do what they want with it. But in a lot of ways, these technologies are kind of stuck back with phones and address books, and me remembering what people's phone numbers are, looking them up in an address book, and calling them one by one. No group calls, no Zoom, no more collaboration, just one-to-one. So this is where I want to introduce NATS, right? So what is NATS? Now this can't be a KubeCon talk without a buzzword-soup paragraph here. So NATS is an open source, high performance messaging system and connective fabric. And really what the project aims to do is to simplify the number of technologies that you use for your services to communicate, all while empowering you to build systems that are globally available, multi-cloud, multi-geo, and highly adapted to change and scale. Now I know that seems like a lot, but what we've actually done with NATS is really hone in on what it looks like to communicate. We're not trying to handle orchestration or compute. A lot of those are solved problems, and a lot of you guys have already adopted much of that. But we want to handle the communication bit, which we think gets kind of swept under the rug a little bit and not really discussed. Okay, so what does NATS do? Well, one of the things that it solves for is this idea of location independent addressing, meaning you don't use DNS, you don't use IP addresses to have your services communicate with each other, right? We can now use these kind of subject strings, which I'll show you in a sec, and they can communicate with each other and not have to know where these particular services are in the world. We also introduced this idea, or we've implemented this longstanding idea, of M:N communications, meaning we're not just necessarily talking about one-to-one, but the flexibility of everything in between, everything talking to everything and everything talking to some things.
And I'll show you how that's kind of expressed inside of NATS. We are also push and pull based. So somebody can go request data, ask a question and get an answer back, or they can have something pushed to them: I'm interested in a particular topic and I want that pushed to me. We also have this idea of decentralized and secure multi-tenancy. So we really wanted to design NATS to be truly multi-tenant, meaning that completely different organizations can coexist on a single NATS system and still have their security requirements met. And this works really, really well for organizations where you don't have to necessarily tune your NATS deployment to your specific use case. You deploy NATS in one way and it works for all sorts of different use cases, all sorts of different tenants that can do whatever they want. And then, and we'll touch on this a lot more, there's this idea of intelligent persistence: if we store our messages in a globally ordered set of messages, we can actually do some very interesting things around how we replicate those across the globe and how we express different types of data structures at a high level on top of that, things like key value stores and object stores, all from this atomic unit of a message being stored and persisted. Lastly, NATS operates very easily, mind you, on a global scale, meaning that you can run in multiple clouds, multiple data centers, multiple regions, and have all these things connected and acting as one system, punching through all these kinds of different networks in your public clouds, extending all the way to the edge, where you can run this thing on a Raspberry Pi Zero and have that operate as part of a single large organism. We have customers doing all kinds of really awesome things, from constellations of leaf nodes to all kinds of crazy stuff to solve problems at the edge, and that just kind of proves out that NATS has this global scale that is actually not too bad to set up.
Okay, so on the more practical side, before I kind of show you guys what NATS is: NATS is a simple client-server architecture, meaning we have a NATS server that you pull up, that you can cluster and supercluster and all kinds of fun stuff, and that's written in Go. It's really easy to deploy, we have containers, we support Kubernetes, obviously, but we also support putting it on bare metal if you really wanted to. It's really about the flexibility here. And then on the client side, you can't solve a big communication problem unless you support a bunch of different languages. So we have 40-plus client libraries in all of the major languages that people use to connect to NATS. So I wanted to show you guys kind of a NATS core demo and what it is. Again, I said NATS is really fun and easy to demo, and let's hope that that's true. We're gonna pray to the demo gods for a second and see if this conference wifi can really work it. So it looks like Todd's still, well, no, it looks like I've inched out Todd a little bit on the arm wrestling match. That's great. So let's go over what NATS core is. So NATS core is basic fire-and-forget message publishing. We actually have a CLI that we've built in Go, called nats, where you can interact with a NATS server as a client. And this is really fun for doing POCs and just kind of showing you the attributes of how NATS works. Now, can everybody see that all right? Are we good? Okay, I don't have to put out another survey on whether you can see the screen or anything. Okay. So the way NATS works is, right now I have a default context, which means I have to fire up a NATS server. A NATS server is not very hard to fire up: you just type nats-server and you get it running. Now to interact with the NATS server, NATS has kind of this core atomic unit of a message, and you can publish and subscribe to these messages. But you can also do other patterns like request-reply, fan-in, fan-out, message persistence and streaming.
So I'm gonna show you guys what it looks like to subscribe to a message. I'm just gonna say nats sub hello.*. This is subject-based addressing, meaning the star is a wildcard for that particular token. And so I could put anything I want in there, but it's going to match to that particular subject. And then I'm not gonna reply to anything, I'm just gonna subscribe to the hello subject. Over here I'm gonna say nats pub hello.jrm and let's just throw a bunch of them in there, count, I don't know, 1000. Whoops, I need to actually publish a payload. Now a payload in NATS is anything that you want to send in that message. And it could be anything, we're payload agnostic, meaning this could be a string, it could be JSON, BSON, Protobuf, whatever really you want, whatever kind of suits your fancy. We have no opinions about what the data format is. So I could send 1000 messages and it's actually pretty quick. To show you guys kind of the speed of NATS, because I say it's high performance, but I guess you could say anything's high performance nowadays, we have a nats bench command, and I could say I want one subscriber and I want one publisher and I want to send it to the hello subject, with enough messages to get a good sample size. I think it might need one more zero. I don't know, that was too fast. Let's put another zero on there. There we go, okay. So as you guys can see, we're publishing, this is locally on my M1 MacBook, but it's 7.3 million messages a second. It's pretty good. And again, you can cluster these for slightly more performance and durability and things like that. NATS also has this idea of request-reply. So I can say nats reply, let me get rid of this, nats reply hello.* again. This time I'm just going to reply with an echo, but I could reply with whatever I wanted to. This is kind of your typical HTTP request-reply, if you think about it that way. And then I could say nats request hello.jeremy and pass in a payload, hi.
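For readers following along, the hello.* wildcard behavior demonstrated here can be sketched in a few lines of Python. This is a toy illustration of NATS subject-matching semantics (* matches exactly one dot-separated token, > matches one or more trailing tokens), not the actual server code:

```python
def subject_matches(pattern: str, subject: str) -> bool:
    """Toy sketch of NATS subject matching: '*' matches exactly one
    token, '>' matches one or more trailing tokens."""
    p_tokens = pattern.split(".")
    s_tokens = subject.split(".")
    for i, p in enumerate(p_tokens):
        if p == ">":
            return len(s_tokens) > i  # '>' needs at least one more token
        if i >= len(s_tokens):
            return False
        if p != "*" and p != s_tokens[i]:
            return False
    return len(p_tokens) == len(s_tokens)

assert subject_matches("hello.*", "hello.jrm")        # one-token wildcard
assert not subject_matches("hello.*", "hello.a.b")    # '*' is exactly one token
assert subject_matches("hello.>", "hello.a.b")        # '>' matches the rest
```

So a subscription on hello.* catches hello.jrm, hello.jeremy, and anything else with a single token after "hello".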
And you can see I get a response back. And so even though this is all kind of pub-sub under the hood and this is all asynchronous communication, you also get these very easy semantics, both in the client SDKs and inside of the CLI, to be able to build little bits of microservices on this, because now you kind of have this mixed-pattern development, which we'll talk about in a sec when it comes to microservices. So that's kind of a high level demo of NATS core. Now let's jump into why NATS is good for microservices, and then we'll close out with some NATS JetStream, just because it's really cool. And I want everybody who's developing microservices to be able to kind of adopt something like a durable message queue or persistence or anything like that. Actually, before we move on from NATS core, does everybody still have their window up? By the way, if you just go back to that URL, it's local storage, so it'll pop you right back up. Everybody whip your phone or laptop out again, we'll have some fun here. So I'm gonna say nats context select. You guys were all interacting with a NATS server in Austin, Texas, it's our demo server. So I'm going to actually log into this demo server, just to kind of show you guys one more thing, which is this kind of idea of queue groups and load balancing and everything like that, as well as M:N communication. So I'm going to quickly say nats pub kubecon.rollcall and I'll publish that. And I'm gonna say --replies, to say how many replies I'm expecting back. Now, HTTP can't really do this. It can't say, give me an endpoint, and then give me all the people who care about this endpoint. That's just not how it works, right? But in M:N communication, you can. When you guys are logged into that, you're actually subscribed to this topic. And I can say zero and I'll just wait for a timeout. Whoops, what did I do wrong? Let's see. Let's go over here. See what I did wrong.
Oh yeah, whoops, it's not nats pub, it's nats request: nats request kubecon.rollcall --replies 0. Look at that, I got everybody's name. Some people more than once. But yeah, everybody's name is in here, which is pretty cool. And you probably saw in your log that you received that message and then you responded to it, if you scroll down. I could go do that again just to show you guys, but we have a lot of people in the room that are logged in, which is pretty cool. And because it's going through that demo server, that's why you see that the RTT is, like, sometimes one second, sometimes a millisecond. But if a NATS server is very close to you, and this is the perfect example of what it looks like to be at the edge, these would actually be very much sub-millisecond response times. I could do one more. Let's do an idea of load balancing. So you guys are all still subscribed to the same subject, but I want to load balance it between all of you guys. Now, typically you'd have to fire up a load balancer and put everybody behind it, but NATS actually does that automatically. I could say kubecon.lottery, because I only want one person, but I want it randomly chosen. I'm gonna get rid of this reply. We're gonna say, okay, M, whoever M is that's in the room, won the lottery. What do you see on your screen? You don't have to raise your hand or anything like that, but hopefully some confetti should fall on your screen. Congratulations, you've won the NATS lottery. Awesome. Cool, so that is a high-level overview of NATS, let's get back into NATS for microservices. So just as a high-level overview again of NATS core: fire-and-forget message publishing, it's really fast, we all saw that in the bench. And this idea of flexible subject-based addressing with wildcards, being able to kind of mux all of your different subjects and create a subject hierarchy, makes it really great for being able to discover what you need to do with that data. And again, payload agnostic.
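The fan-out and lottery-style load balancing just demonstrated can be sketched with a toy in-memory broker. This is an illustrative sketch, not real NATS code: plain subscribers each get every message, while subscribers sharing a queue group split the messages between them (real NATS picks a group member randomly; the sketch round-robins for determinism, and it skips wildcard matching):

```python
import itertools
from collections import defaultdict

class ToyBroker:
    """Toy of NATS-style delivery: plain subscribers all receive each
    message; subscribers sharing a queue group are load-balanced so
    only one member receives each one."""
    def __init__(self):
        self.subs = defaultdict(list)       # subject -> [callback]
        self.queues = defaultdict(dict)     # subject -> {group: [callbacks]}
        self._rr = defaultdict(itertools.count)  # round-robin counters

    def subscribe(self, subject, cb, queue=None):
        if queue is None:
            self.subs[subject].append(cb)
        else:
            self.queues[subject].setdefault(queue, []).append(cb)

    def publish(self, subject, msg):
        for cb in self.subs[subject]:
            cb(msg)                         # fan-out to every plain subscriber
        for group, members in self.queues[subject].items():
            # exactly one member per group gets the message
            i = next(self._rr[(subject, group)]) % len(members)
            members[i](msg)

broker = ToyBroker()
seen = []
broker.subscribe("kubecon.lottery", lambda m: seen.append(("a", m)), queue="workers")
broker.subscribe("kubecon.lottery", lambda m: seen.append(("b", m)), queue="workers")
broker.publish("kubecon.lottery", "ticket-1")
broker.publish("kubecon.lottery", "ticket-2")
# each ticket went to exactly one worker in the group
```

The point is that no separate load balancer is configured: membership in the "workers" queue group is the load-balancing policy.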
So whatever format you want to use; I just threw strings around here, but you could use whatever kind of format you want. As far as this kind of multi-pattern approach, we support that request-reply, publish and subscribe, fan-in and fan-out for typical pub-sub, and then this load balancing, like I just showed with the lottery: you have free load balancing via this idea called queue groups. Okay, so now it's choose-your-adventure time, and since everybody was interested in microservice architectures, let's talk a little bit about that. Now, when I think about what makes a good architecture, you know, when I was a developer, it almost always seemed to be like, oh yeah, it needed to be extensible, we needed to be able to maintain it. But there are all these other properties that really matter. And I feel like this resonates a lot with the Kubernetes and CNCF crew: there's so much more to application development and architectures than just making it maintainable, like making it resilient and secure, observable, extensible and adaptive to change. And I wanna talk a little bit about how NATS approaches each of these. So in terms of resilience, we talked about this client-server architecture. The coolest part is, and I didn't demo this, I really should have, but clients self-heal, meaning if a server goes offline, especially if you're in a clustered mode, the client already knows about the server's peers and automatically connects to a peer. It does so very transparently for you. We actually fail over entire clusters as well. I wish I had a million minutes for a demo, but I can't. I probably could show you guys at some point, if you come up to me or anything like that, how we fail over entire clusters. Like, if an entire region goes down, all those clients move over automatically to the closest region based off of RTT. And that's just something that NATS handles automatically for you. Servers also protect themselves at all costs.
One of the challenges with building messaging systems is having, not bad actors, but slow consumers that eat up all of your resources. And so the NATS server really tries to be a messaging server that protects itself at all costs to keep resiliency up. So it'll boot out slow applications, which we think is actually a really good trade-off, because we don't want to eat up too many resources on the server side. Like I said, failover is automatic and load balancing comes for free. What about this idea of security? Well, we've created a security model in NATS that's what we call decentralized, meaning that we use some cryptography to be able to verify a trust chain, which means there's very little that needs to exist on the server. We don't store any private keys or anything like that. And we isolate these NATS environments via accounts. So you can have an entire NATS cluster for your organization, have multiple teams on that cluster doing whatever the heck they want in their world, but also having the ability to send data over account boundaries, which is great. So sharing streams and persistent streams and services between accounts, also enforcing resource limits for tenants. So very much like in the world of Kubernetes saying, we want to operate this Kubernetes cluster as kind of a public utility for our organization, you can do the same with your NATS cluster. And you can create permissions for each service without server changes, which is really, really cool. So again, this is all cryptographically defined. You're able to mint a JWT that you just give to somebody, and they can then have a secure way of using exactly what you want them to use, without having to make any updates to some server user database or anything like that. Let's talk about location transparency. I talked about this a little bit when we were talking about our subjects, but location transparency is a key characteristic of service-oriented architectures.
This is kind of where it came from: when SOA was all the rage, the idea of location transparency really got big. But it's this idea of decoupling location, in that consumers of a service don't really know what a service's location is until they locate it in a registry. So you have the ability to kind of have a late binding to a service without knowing its address or anything like that. And it gives you this flexibility that allows you to kind of swap out services, rebalance traffic, everything like that. All the things that you've really liked about a load balancer, but just baked into the construct itself. So because we have location transparency, you get free service discovery with subject-based addressing. You can easily move services between cloud providers, given that they're stateless and they just really care about NATS. And you automatically get routed to the closest responder. So if I'm asking a question on a particular topic and I'm over here in Detroit, and the responder closest to me is all the way in California, I'll get routed to that. But then if I stand up a service that's maybe in Ohio, I will automatically start getting responses from that service. I don't have to do any more reconfiguration. NATS handles all that for me as a broker. You also get this idea of traffic shaping and subject mapping, which I don't have time to cover, but it's a way for us to configure splitting traffic by different weights or rerouting traffic, which is great for rollouts, betas, and being able to test things. Let's talk about observability. I might show this in a little while, but you can observe all traffic in real time. NATS is just a message broker. So I could say, I want to subscribe to everything. Give me everything, and then I can go put that in a data store somewhere, put it in a stream, and eventually transform it into something that's Prometheus-compatible, which we actually can do with a tool that we've created called Surveyor.
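The weighted traffic splitting mentioned here can be sketched as a small mapper. This is an illustrative toy (the subject names and percentages are made up for the example), not the actual NATS subject-mapping configuration syntax:

```python
import random

def make_weighted_mapper(mapping, seed=None):
    """Toy of NATS-style weighted subject mapping: each published
    subject is rewritten to one of several destination subjects by
    percentage weight, e.g. for canary rollouts or A/B tests.
    'mapping' is {destination_subject: weight}."""
    rng = random.Random(seed)
    dests = list(mapping)
    weights = [mapping[d] for d in dests]
    def map_subject(subject):
        # the original subject is ignored in this toy; real subject
        # mapping can also reuse tokens from the source subject
        return rng.choices(dests, weights=weights, k=1)[0]
    return map_subject

# send ~90% of traffic to the stable service, ~10% to the canary
mapper = make_weighted_mapper({"orders.v1": 90, "orders.v2": 10}, seed=42)
picks = [mapper("orders") for _ in range(1000)]
```

Because the rewrite happens in the broker, neither publishers nor the v1/v2 services need any code changes to shift the split.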
So you can gather things like latency metrics on each of your services, which is really, really neat. And you can filter all these metrics, again, via subject. So we're taking some of these constructs, some of these atomic units, like subject-based addressing, and we're kind of putting them everywhere. Like, what if you could do your metrics this way and your observability this way? What if you could do your services this way, or your streams? And so we're really trying to stretch a lot of the basic concepts of this into something that makes a lot more sense as a whole. In terms of extensibility, we also have this idea of multi-pattern development. So again, our synchronous request-reply, and then our asynchronous patterns of publish and subscribe. And if you want to persist your data, or you want to be able to catch up with your data, or you want to be able to do something like a service mesh does with auto-retries, you can do all of that by persisting those messages and creating a policy that works for your particular use case, which is awesome. So I haven't talked about JetStream yet, but you also have the ability to create things like key value and object stores. I can be a nerd and talk about NATS all day long, but this is kind of what I think about when I think about NATS for microservice architectures. We want to really stretch beyond just request-reply and going through proxies for everything to talk to each other, and instead say, can we embrace more patterns? And instead of saying, well, on the back end we have this kind of persistent message bus, and on the front end we just do gRPC everywhere, what if we mixed some of those, and we really tried to pull things off the shelf and use one system to broker all of this information? How much better would something like collecting observability metrics be? Okay, so I don't have time to look through all of these other slides, because that's not what we picked.
But there's one thing that I want to close with, and then we can move on to questions, because I'm sure, I hope, some of you guys have some questions. So let's talk a little bit about JetStream. So NATS JetStream is a next-gen persistence layer that's built on top of NATS, and it allows you to have this temporal decoupling between publishers and subscribers, which basically means that if I'm publishing a message, the subscriber doesn't need to be online in order to eventually get it. And the way we solve that is by persisting the message, either in memory or on disk. Many of you who have used solutions like Kafka know very much all of the fun challenges that come with being able to do this at scale. One of the cool things about NATS is that by default, it is globally scalable. So we get to piggyback off of a lot of the attributes that core NATS brings us to be able to bring us this type of multi-tenant persistence. And so instead of talking to you guys about what JetStream is, I want to show you guys what JetStream is. And so you guys can visit this: natswhiteboard.onrender.com, the room is kubecon22. If you guys are also staying on that page, I'll just navigate you to it automatically. If I can, let's see. I'll navigate you with NATS. Let's try that. See, I'm in demo. I'm gonna say nats pub kubecon.navigate, and I'll paste this in. There we go. Some of you guys should be already moved over there by now. Feel free to draw on this little whiteboard that we have. I want to see everybody's beautiful drawings. Here we go, cool, awesome. Now, the fun part about this is this is all in JetStream, meaning it's all persisted. You can refresh the page and it will load it all up again. This is again using our demo server in Texas. But one thing I can do is, like I said, I can say nats sub and I can subscribe to this whole stream.
I'm going to say, I think this is called whiteboard, or let's see, stream whiteboard. And this would give me all of the rooms. I don't want all of the rooms. So I can filter on that subject. I'd say whiteboard.kubecon22. I think that's what I called it. Let's see how it works. That's pulling in all of the data locally for me. So I could put this on a local stream. I could replicate it into a different cluster or into a different leaf node. I wish I could talk about all of those things, but we're running out of time. But as you can see, I'm getting your drawings in real time as well, which is really cool. And this is kind of the ability to observe state. I'm pulling from a stream right now, so I can kick it off and start it back at the beginning and recreate everything. Or you could just say, give me whatever's happening in real time. It doesn't really matter. There are a lot of options here. But as you can see, we're kind of graduating from this core notion of what it looks like to just send a message around, and then what it looks like to do that at scale, and now what it looks like to store those things at scale. And those are the very basics of NATS. So I hope you guys had fun drawing over this fun conference Wi-Fi. Of course, if I brought a server over here closer, we'd have much better performance, but it looks like some people are also deleting, which is great. So anyway, thank you guys so much for the time. I know we have about five minutes and I wanted to make sure I can answer any questions. So thank you so much. My beautiful co-host Todd will hand out mics to anybody who has questions. So this solution kind of falls within a whole bunch of actual other solutions that are out there. You mentioned Kafka. This does somewhat what Kafka does, right? I see some of the things you can do with Redis. It falls into that, right? The MQTT protocol is almost identical to this. Where do you see NATS in that whole range?
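The replay behavior demonstrated with the whiteboard can be sketched as a toy append-only log. This is an illustration of the idea rather than JetStream's real API; the subject names mirror the demo:

```python
class ToyStream:
    """Toy of a JetStream-like stream: an ordered, persisted log of
    messages captured by subject prefix; consumers can replay from the
    beginning, from any sequence, and filter by subject."""
    def __init__(self, subject_prefix):
        self.prefix = subject_prefix
        self.log = []  # ordered (seq, subject, payload) tuples

    def publish(self, subject, payload):
        if subject.startswith(self.prefix):
            self.log.append((len(self.log) + 1, subject, payload))

    def replay(self, start_seq=1, filter_subject=None):
        for seq, subject, payload in self.log:
            if seq < start_seq:
                continue
            if filter_subject and subject != filter_subject:
                continue
            yield seq, subject, payload

stream = ToyStream("whiteboard.")
stream.publish("whiteboard.kubecon22", "draw line")
stream.publish("whiteboard.other", "draw circle")
stream.publish("whiteboard.kubecon22", "erase")
# replaying from sequence 1 with a subject filter recreates one
# room's whiteboard state, in order
events = list(stream.replay(filter_subject="whiteboard.kubecon22"))
```

This is why a page refresh can rebuild the drawing: new subscribers replay the ordered log instead of only seeing messages published while they were online.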
I mean, why would I use NATS over, say, Kafka? Yeah, so one of the big reasons we have folks adopting NATS, especially at the edge, is because they have to put it on smaller and smaller devices. Who wants to run Kafka on 10,000 Raspberry Pi-like devices? They're gonna have a bad time. That's usually what we're seeing. So because NATS is small and it can run on very low resources, we're seeing that that use case is really great. And that's starting to expand to not just the edge, but to places where I often see companies that are still using Kafka on the very back end, but having to scale Kafka multi-geo, multi-cloud, everything like that, is a challenge. So I'll often see them using NATS kind of on the front end of their stack and maybe still feeding it into Kafka, because they have a big legacy data processing pipeline that they don't want to get rid of. But quite often, NATS works at that scale. Okay, thank you. One other quick question. A common use case that at least I see in MSA development is a request queue. So I publish a request and I've got a gazillion subscribers, but I only want one subscriber to actually actively work that request. How do you do that in NATS? Yeah, so using streams, we have a concept called a work-queue stream, which is very much just a durable queue where one thing picks each message off at a time. Under the hood, it still uses one consumer to kind of keep track of it, but multiple subscribers can interact with it. And so it's just a type that you configure. I didn't get to show how configurable streams are, but we have a really cool stream and consumer architecture that works really, really well for a bunch of different situations. So whether you're just trying to do retries, or you're trying to do a big durable queue that scales, or you're trying to put a key value store all over the world that responds in under 10 milliseconds, you can do all of that with the same kind of construct, just tuned differently. Yes.
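The work-queue stream described in this answer can be sketched as a toy queue where each published message is handed to exactly one of many workers and then removed. This is an illustrative sketch of the semantics, not the JetStream implementation:

```python
from collections import deque

class ToyWorkQueue:
    """Toy of a JetStream work-queue stream: messages are persisted
    until exactly one of possibly many workers consumes and acks each
    one, after which the message is removed from the stream."""
    def __init__(self):
        self.pending = deque()

    def publish(self, payload):
        self.pending.append(payload)

    def next(self):
        # any worker may call this; each message is handed out once
        # (the toy treats the pull itself as the ack)
        return self.pending.popleft() if self.pending else None

wq = ToyWorkQueue()
for job in ["resize-img-1", "resize-img-2", "resize-img-3"]:
    wq.publish(job)
worker_a = [wq.next()]   # first worker pulls one job
worker_b = [wq.next()]   # second worker pulls the next job
# no two workers ever work the same job
```

In real JetStream, the worker explicitly acks the message, and unacked messages are redelivered; the toy skips that to keep the exactly-one-worker idea visible.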
Yeah, we are using NATS today. So how does it load balance if you are using a Kubernetes node pool?

Yeah, and so the question was, how do we load balance when we're using a Kubernetes node pool? The fact of the matter is, each one of those pods would essentially be a client that subscribes to a NATS instance, and NATS will be doing the load balancing. Since NATS is getting the message and essentially brokering it for request-reply, you get all of that load balancing for free.

Okay, and one more. Is there a maximum TPS that NATS is able to support?

What is the maximum TPS, yeah? That's highly dependent on your use case and everything like that, but what I like to say is, NATS easily floods 60-gigabit links if you want it to, especially for Core NATS. On a per-stream basis, we're seeing a lot of people get about 250,000 to 300,000 messages per second persisted, and that gets punched down to the disk for every message. But for Core NATS, for just request-reply, stuff that's not being persisted, I just benched it on my M1, and it was like 7.1 million messages a second. I'm sure on beefier hardware you can push a lot more.

Yeah, over here. Hi, my question is, how does message replay work? What I mean is, if a subscriber is down and then it comes back up, is it ordered like Kafka?

Yeah, so you can order it. We also have a concept similar to ordered partitions in Kafka, just slightly different, but the same basic concept. That's not out of the box, though, so you don't have to adopt that complexity if you don't want to. But you can configure a lot of what retries look like for messages. You have a retry window that you can configure on a per-consumer basis, as well as a backoff policy. So if you want to manually configure, per consumer, what kind of backoff policy you get and how many retries you allow, that's pretty easy to do.
We also get this question a lot: do you have a dead letter queue? We don't officially support a dead letter queue, but we do have advisories for when something maxes out on its retries, so you can easily create your own dead letter queue. Part of the NATS philosophy is that we want to create a lot of these constructs, but we don't want to adopt so much tech debt into the stack that it feels like you're adopting a ton of stuff you don't need. But it's easy to create one of those, yeah.

So it's been compared to Kafka a lot, so of course I have to ask: is there any migration plan from using Kafka to using NATS, like any sort of automation that's been done to streamline that migration? Or is that maybe a bit too difficult, and is it more suited if you just have a greenfield project?

Yeah, that's a great question. We do have a NATS-Kafka connector, so you can draw the line on where you want to pick up: some of the applications using NATS, and some still using Kafka. That's where we try to guide people who are doing it either way. They're replacing Kafka in a lot of ways, but there might be Kafka still living somewhere, and that's where we use that connector.

We have one question over here. Over here? Oh, okay. Yeah, it could have. Yep. Yeah, I wrote that little React demo on the plane. It's actually super easy to write, it's like a hundred lines of React. But you're right, as far as exactly-once guarantees go, we do support that. We support a bunch of different qualities of service, and exactly-once obviously comes with a lot of trade-offs as well, because it involves a lot of acking and nacking and making sure the message really does get through. But that's all pretty easy to set up when you're configuring a stream and its consumers.
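The per-consumer retry window and backoff policy mentioned earlier, and the advisories used to build a DIY dead letter queue, are both just consumer configuration. A hypothetical JetStream consumer config sketch (the durable name is made up, and durations are shown human-readable for clarity; the raw JetStream API expresses `ack_wait` and `backoff` in nanoseconds):

```json
{
  "durable_name": "processor",
  "ack_policy": "explicit",
  "ack_wait": "30s",
  "max_deliver": 5,
  "backoff": ["1s", "5s", "30s", "2m"]
}
```

When a message exhausts `max_deliver` attempts, the server publishes an advisory on `$JS.EVENT.ADVISORY.CONSUMER.MAX_DELIVERIES.<stream>.<consumer>`; subscribing to that subject and copying the offending message elsewhere is the usual way to roll your own dead letter queue.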
And by the way, we divorced streams and consumers a little bit from each other, so you can set up a stream and have exactly-once delivery for this consumer of the stream over here, and a different quality of service for that consumer over there, and it's all still the same data set, which is really neat. Yes.

So earlier you did a navigate and a couple of other requests that you sent. Were those handled by the React app that you wrote, or is that built into the SDK stuff?

Yeah, those were all just part of the React app. It's a very easy "subscribe to this topic and then go do something." The client applications are very easy to write. If you're used to writing any sort of web application with handlers, it's very similar to that. It's very easy to build reactive applications in that way.

All right, I think we are at time. Feel free to come up to me if you have any more questions. I love talking about this stuff, but thank you so much for attending. Really appreciate it.