Hello everybody. Welcome to the NATS presentation. I'm really happy that there are so many of you — it was not easy to find the place. This talk is about the state of NATS: about the core, persistence, materialized views. I will walk through some basics of NATS with you and also update you on the new things. My name is Tomasz and I'm an engineer at Synadia, the company behind NATS. I'm mainly a maintainer of the Rust client, and I also contribute to the other repositories. Synadia offers NATS as a service and employs most of the contributors. What will be the agenda? First of all, as I said, I want to do some introduction to NATS for those of you who have maybe heard of it, or not yet, and don't want to be out of context when I get to the features. Then we'll go through the new features, and we'll do a hopefully successful demo of chosen features, which are key value storage and object storage. So, before I start: how many of you have heard about NATS? That's impressive. How many of you use NATS in production? Those are really impressive numbers. And actually, this is something I wanted to talk about for a moment, even though it's not part of my agenda. What's really interesting and impressive to me is how many people reached out to us while we're here — the companies we met at the pavilions that are using NATS. Like half of them were at some stage of doing POCs, or knew about it, or had it in production. That part of the adoption that came bottom-up from the community is something that shocked me, as positively as it can. That's what made us really happy, and those raised hands say a lot. But okay, I now feel it's a little pointless to talk about what NATS is if most of you use it in production. So, let's go through it anyway. 
So, basically, it's performance, simplicity, security and availability. It's a very simple thing that is at the same time very performant and very easy to secure, to the point you could call it zero trust, properly secured. In addition, over the last year — especially more than one year, since JetStream arrived — I think it improved a lot in terms of feature coverage, because it's not only pub/sub as it was. There was NATS Streaming, but it was not as simple as NATS without it. With JetStream built into the NATS server, which is a 15 megabyte binary, we get back the simplicity with all the features we needed for persistence. And it runs almost anywhere, because it's written in Go, so wherever you can compile Go, it will run. And it's very small, as I said: 15 megabytes. And again, getting back to the community part: as you can see, the numbers are just numbers, but they're growing increasingly fast. When I last updated this part of the slides — because this part I reuse — I had to bump the Slack member numbers by 1,000. And I was like, whoa, that's really a lot. And we see it on Slack that it's an incredibly active community. Sometimes, as Synadia employees and contributors — I'm doing something, I see a question, and before I manage to answer it, somebody else from the community has answered it. It's really incredible and just something I wanted to share. And thank you all, because a lot of you are contributing to those answers and that help. The adoption is growing because of that open source community, and it's a pleasure to hear from anybody I talk to how they are feeling about NATS. So thank you all for that. So who is using NATS? You, basically — I saw all the hands up. But there are lots of actual companies, very big ones, like Mastercard. I will not name them all because we could spend quite some time. 
So it's not only open source adoption, it's also actual production usage at huge scale. Sometimes the clusters are super clusters spanning the globe. It's really impressive and it's growing. Okay, the overview — I will keep it a little shorter since most of you use NATS, but to set the baseline for everybody, including those watching online. The thing is, it's location independent. If you know the address where the NATS cluster or server is, you're fine. Then you just need subjects to publish and communicate with other services. At its core, it's pub/sub — at-most-once, fire and forget. Core NATS also has the request-reply pattern, so you can have a kind of synchronous communication. You're not that independent then, because the replier has to be available to return the message, but it's possible in core. And then streams come in, which were based on NATS Streaming before; now they're on JetStream, the new persistence layer built into the NATS server — it's not a separate sidecar or anything like that. So right now we have three patterns for communication and three delivery guarantees: at-most-once if you're using core NATS, at-least-once with standard JetStream, and exactly-once if you do a little more work with JetStream — it's very simple, but a little less efficient. Security is a very important thing. The ability to make it secure from day one matters — it's not something where you play with NATS and after some time realize, well, now security, and start almost over again. You have so many ways — centralized and decentralized ways of securing things — that it really makes it possible for POCs to be TLS-enabled and secured from very early on. Decentralization is a big thing, and I will not talk about it now because I have a few slides about it later. The same for global scale, which NATS, with all its topologies, allows. 
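To make the exactly-once guarantee concrete, here is a minimal toy model in Rust of the idea behind it: the publisher attaches a message ID (in real JetStream this is the `Nats-Msg-Id` header), and the stream drops any publish whose ID it has already seen inside its deduplication window. This is a sketch of the observable behavior only, not the actual client or server API, and it keeps an unbounded set where the real server bounds the window by time.

```rust
use std::collections::HashSet;

/// Toy model of JetStream's publish-side deduplication.
struct DedupStream {
    seen: HashSet<String>,
    messages: Vec<String>,
}

impl DedupStream {
    fn new() -> Self {
        DedupStream { seen: HashSet::new(), messages: Vec::new() }
    }

    /// Returns true if the message was stored, false if it was a
    /// duplicate (same message ID seen before) and therefore dropped.
    fn publish(&mut self, msg_id: &str, payload: &str) -> bool {
        if !self.seen.insert(msg_id.to_string()) {
            return false; // duplicate publish, dropped by the stream
        }
        self.messages.push(payload.to_string());
        true
    }
}
```

With this, a publisher that retries after a timeout can safely resend with the same ID: the retry is acknowledged but not stored twice.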
And yeah, all of that in a 15 megabyte binary, which makes it possible to use in really many scenarios like IoT and small edge devices — whatever can actually run NATS, because it's so lightweight. So yeah, here are some of the deployment models. It runs mostly anywhere: on a Raspberry Pi, on VMs, on bare metal, Docker, Kubernetes. And as I mentioned, the architecture varies — you can pick. The simplest one is just one NATS server. That's probably where most of our users start: just run the server and have it done. The fact that it's 15 megabytes makes it really nice even for testing, when you can spin up the actual server in your tests and run against it. You can run thousands of them on your machine and run the tests against an actual NATS server, not some mocks. What people usually end up with is a NATS cluster — a NATS cluster for high availability and horizontal scalability. The next step is super clusters, which are clusters connected through gateways. What the gateways do is limit the chattiness. So even if you have clusters spun up across regions — one cluster here, one cluster in a different region or a different cloud provider — you don't want them to be as chatty as typical clusters of whatever software you like. What the gateways do is limit the traffic to the minimum needed to communicate, which makes it really nice for big deployments that are not blown up by all the communication happening. And the last one on the list is super clusters with leaf nodes — it could be a cluster or a super cluster with leaf nodes. Leaf nodes are NATS servers, which can also run as clusters, that don't have to have — and this is one of the use cases — a constant link with, for example, the cluster. So they're great for IoT applications. 
So you can have a leaf node sitting on an edge device in a factory and have the cluster in the cloud that, for example, aggregates all the data. If the network goes down — which is pretty common for IoT factory floors in multiple locations, et cetera — the state will synchronize whenever the link is up again. And while the connection is down, the leaf node is still able to function and serve all the services, all the features. I think that's a really great thing. So let's get to the new features. Some things are around security: encryption at rest and OCSP support, so certificates can be revoked. Next is monitoring and management. We put quite a lot of effort into it, not only for MQTT and WebSockets — that's also important — but also small but valuable things, like the errors from the health endpoint now being logged. That's especially useful in Kubernetes setups, where seeing the error without a log of what's happening didn't give you much. Having more things fit into the health check also helps. Another very interesting thing — we could talk about it for a long time, but for now just a sneak peek — is a way of partitioning. Not the scary kind; it's pretty simple, actually. What it does is deterministically hash one of the tokens in the subject — a token being, for example, an order or customer ID — and give you another token at, for example, the end of your subject, because you can form it as you like. That deterministic hashing allows you to split the traffic really nicely, and you don't lose any of the typical JetStream and NATS features with it. This is built on top of the subject mapping feature of NATS; we've added a small mechanism on our side that does the hashing. 
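The partitioning idea described above can be sketched in a few lines of Rust. Note this is only an illustration of the concept: the real NATS server applies its own hash function inside subject mapping (via the `partition` mapping function), not Rust's `DefaultHasher`, and the function name here is made up for the sketch.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Illustrative only: hash one token of a subject (e.g. a customer ID)
/// and append the resulting partition number as a new trailing token.
/// The same token always maps to the same partition, so traffic for a
/// given customer deterministically lands on one partition.
fn partition_subject(subject: &str, token_index: usize, partitions: u64) -> String {
    let token = subject.split('.').nth(token_index).unwrap_or("");
    let mut hasher = DefaultHasher::new();
    token.hash(&mut hasher);
    let p = hasher.finish() % partitions;
    format!("{}.{}", subject, p)
}
```

Because the mapping is deterministic, consumers can each bind to one partition suffix (`orders.*.0`, `orders.*.1`, ...) and split the load without any coordination.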
As this is KubeCon, it would be really not good not to mention that a lot of work was put into the Helm charts — official support for the Helm charts. Right now, I think most things can be configured via the Helm charts: you can set up authentication, all the accounts if you want to, how the cluster is formed, what the storage settings are, et cetera. We're getting good responses that it did improve, and I think that's important. We got much closer to being easily run on Kubernetes, because we think that as a cloud-native technology, it should run really well on Kubernetes. And in JetStream, a lot of small and big things. You can now set limits, for example, on accounts of different kinds. A lot of work was put into pull consumers: ephemeral pull consumers were added — consumers that only exist in JetStream while you're subscribed to them — and pull consumers became way more performant. There's also the inactive threshold, so you can set how long your ephemeral consumer will live after you're done with it. You no longer have to pick between a durable consumer, which stays forever until deleted, and an ephemeral one that dies whenever you unsubscribe — now you have something in between, which is very important. Back-offs: when you're sending a NAK — saying, I will not acknowledge this message, please send it to me again soon — you can now define when the message will be redelivered. You can also specify a back-off array of times, saying: whenever acknowledgement fails, the next delivery will happen after this delay, then this one, then this one. So basically you can have whatever back-off you'd like in those redeliveries of messages awaiting acknowledgement. A lot of work was also put into — I'll gather those together — the fact that you can now tag, for example, a stream. 
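The back-off array behaves roughly like the sketch below: each redelivery attempt picks the next duration from the array, and once the array is exhausted the last entry keeps being reused. This is a conceptual model of the described consumer setting, not the server code; the exact interaction with settings like max deliveries is more involved.

```rust
use std::time::Duration;

/// Given a back-off array and a zero-based redelivery attempt number,
/// return how long to wait before the next delivery. Past the end of
/// the array, the last entry is reused; an empty array means no delay.
fn redelivery_delay(backoff: &[Duration], attempt: usize) -> Duration {
    let idx = attempt.min(backoff.len().saturating_sub(1));
    backoff.get(idx).copied().unwrap_or(Duration::ZERO)
}
```

So with a back-off of `[1s, 5s, 30s]`, the first redelivery waits one second, the second five, and everything after that thirty.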
So a stream will be placed in a specific location, or on a cluster with a specific tag. You can also have alternates of streams, so you can pick the closest stream to you, for example, and get messages from it. That allows you to reduce latency: if the stream was far away from the consumer, you now have options to improve the latency a lot if you'd like to. Sealed streams: you can set a stream to be sealed, which means it becomes basically read-only — you can read it, but you cannot put more messages into it. And the two most important features, I believe, are key value storage and object storage — key value and object store — which I will demo in a moment. Actually, I will demo them now, hopefully. For a change, I'll be using the Rust client. How many of you are using Rust? More than I expected, actually, because, you know, it's always picked as the most loved language in the Stack Overflow survey, and also, I think, the least known at the same time. But for the demo, for those of you from other languages, it should still be okay, because the API, for this at least, is pretty simple. So we'll start with connecting to NATS itself. This question mark — just to let you know what it does: how many of you are Go developers? Oh, so this is for you — you don't have to write `if err != nil { return err }`; that's what the question mark does. Now we'll create a JetStream — it's not a connection, it's just a context, so you can set the domain and different things there. So we use nats jetstream new and pass the NATS connection; it does not return an error. And now I'll start with key value, and that's a key value Config. Okay. So have any of you already used key value or object store? Something new to show, finally. 
So there are a few things you can set in key value: bucket, which is just a name. And just so you know, key value and object store are entirely built on JetStream. Underneath — for those of you who are proficient with JetStream — when you peek into the streams that were created while using key value and object store, you just see streams, and all the consumers underneath that are handling all the stuff. We just had to improve JetStream a little to make it efficient. So let's create a bucket. Let's call it, I don't know, pets or things — it doesn't matter, really. Description is nothing important. Max value size: you can specify the maximum size of a single value. You can specify history: how many values will be kept for a given key; all the older ones will be discarded. Max age is basically the TTL: you can set that after one second or one minute, the keys whose TTL has been reached will just be deleted. We'll not jeopardize our demo with this. Max bytes is the maximum size of the stream itself backing the given key value bucket. Storage is file or memory, and replicas is just the number of replicas of the stream. What's interesting — I got a question recently about how the state is handled — is that if you set three replicas, it behaves exactly like a typical JetStream stream. When you insert a new value and get the acknowledgement, it's there on all the replicas. So that's really nice. Okay, now a little Rust magic. This just fills in all the defaults. And let's not talk about strings in Rust, okay? Just skip this — it makes this string slice a String. It's a really good read, how Rust does strings, because every language has to do it and it's not easy; here it's just more exposed to the user, for a reason, but we'll not get into that now. So let's see if it works. Let's run this. Okay, we didn't fail this step. 
And now what we can do is — what did we call it? Things. Okay, so you can watch over the key values, and that's what I'm doing now in the background with the nats CLI. The nats CLI supports both key value and object store, and I really recommend the CLI for playing with NATS. Okay, first things first, let's do the most obvious thing. We want to put things. Let's go with a key and — sorry, I'm not very creative right now — a value. Okay, let's run it. And what you see here on the right is that the watcher printed the change that happened in the key value bucket; underneath, again, this is just a stream. What else can we do with it? This returns a revision, I guess, if we want it for later use. We can also get the value: we just pass the name of the key. We don't have to pass the name of the bucket, because we're operating in the context of this bucket. This question mark handles the error, but what I'll actually do is match against it — this is pattern matching in Rust — because there are two options for what can happen. There are no nils in Rust, so either we get a value or there is no value for a given key. So if there is a value, let's print it; otherwise it's nothing. For those who use any language that has optionals, that's basically it. One of the reasons I didn't want to do this demo in Go is that when I tried it in Go, I had a nil pointer, so I thought it would be good to not have them. Okay, so we have that here, and if we run it, we should see — yeah, as you see, we see the value. The `Ok` is nothing relevant for now, it's just another wrapper, it doesn't matter. So we very easily put a value, got a value, and we're able to watch it. The watch API I will not show now, because it just enables you to do the same thing inside the application. 
So let's skip it. What we can do also is update. As you see in line 14, I have a revision I didn't use, because I want to show you this now: it allows us to say, for example, the key thing, the value whatever — sorry — and the revision is this rev. What it does is it will only update the key if the last revision is the one we passed. That allows you to handle the cases when other services might put keys into this key value store, so you know you didn't overwrite some values, et cetera. It's pretty useful. If I wrote it correctly — I did. Okay. The next thing we can do is delete the key. Oh, we could do history, but no, we'll not do it — let's delete the key. And this, I think, will be interesting, because it does not actually physically delete the key; as you see, we get a new event: delete. And if we look at the history — I set the history to 10, so that's why I can now watch the history of the key. It's thing — really, I called the bucket things and the key thing. Okay. So as you can see here, the history is there, and the last entry does not have any value and says it's a delete. If I now try to get the value — oh no, I didn't think about this. Fine. I don't need this, I don't even need this right now. If I do this, you see, I get no value. The values are still in the stream, but the last entry is a delete, so you will just not get a value. We can put more values afterwards, or we can retrieve the old values: not watching the history, but explicitly saying, I want to review the whole history, take the old value and put it back again, or do whatever you want. 
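Everything shown so far — put with revisions, update as compare-and-set, delete as a tombstone that leaves history readable, and purge that really wipes it — can be summarized in a small in-memory Rust model. To be clear, this is a toy of the observable semantics only; the real bucket is a JetStream stream, and the type and method names here are invented for the sketch.

```rust
use std::collections::HashMap;

#[derive(Clone, PartialEq, Debug)]
enum Entry { Value(String), DeleteMarker }

/// Toy model of a NATS key-value bucket: per-key history capped at
/// `max_history`, a bucket-wide revision counter, tombstone deletes,
/// and a purge that removes the history for real.
struct Bucket {
    history: HashMap<String, Vec<(u64, Entry)>>,
    revision: u64,
    max_history: usize,
}

impl Bucket {
    fn new(max_history: usize) -> Self {
        Bucket { history: HashMap::new(), revision: 0, max_history }
    }

    fn append(&mut self, key: &str, e: Entry) -> u64 {
        self.revision += 1;
        let h = self.history.entry(key.to_string()).or_default();
        h.push((self.revision, e));
        if h.len() > self.max_history { h.remove(0); } // discard oldest
        self.revision
    }

    fn put(&mut self, key: &str, value: &str) -> u64 {
        self.append(key, Entry::Value(value.to_string()))
    }

    /// Compare-and-set: only writes if the key's latest revision matches.
    fn update(&mut self, key: &str, value: &str, expected: u64) -> Result<u64, ()> {
        let latest = self.history.get(key).and_then(|h| h.last()).map(|&(rev, _)| rev);
        if latest == Some(expected) { Ok(self.put(key, value)) } else { Err(()) }
    }

    /// Latest value, or None if the key is absent or delete-marked.
    fn get(&self, key: &str) -> Option<String> {
        match self.history.get(key)?.last()? {
            (_, Entry::Value(v)) => Some(v.clone()),
            (_, Entry::DeleteMarker) => None,
        }
    }

    /// Delete leaves a tombstone: `get` returns None, history survives.
    fn delete(&mut self, key: &str) { self.append(key, Entry::DeleteMarker); }

    /// Purge removes the key's history entirely: values are gone for good.
    fn purge(&mut self, key: &str) { self.history.remove(key); }
}
```

The difference between `delete` and `purge` is exactly the one in the demo: after a delete the old values can still be read back from history, after a purge they cannot.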
So that's really nice for those who need some audit log, for example, for key value — especially for the financial sector, but for many sectors — so you can get the values back, either for audit or for actual application logic if you like. But if you want to remove everything, there is purge, which does exactly that. I have obviously commented this out because it would error — it would be fine, but never mind. So if we purge this and now look at the history, all the values were deleted and the purge marker is put in. If you do this, the values are gone; you cannot retrieve them. And that's basically the basic functionality of key value. It's nothing very advanced as key value stores go, but it's very nice for those who want some kind of key value storage and don't want to spin up, I don't know, etcd, Redis or whatever, and just want some basic functionality. With this, you have a key value store for free if you're using NATS with JetStream. And while I said it's pretty simple, at the same time it's capable: for example, thanks to a community contribution, k3s is now able to use key value from NATS as the state store for Kubernetes. So it's functional enough that you can have Kubernetes with NATS as its state. Okay. So this was the key value. Now let's delete this before it bites us. Okay. So what about the object store? Let's call it store: create object store, nats object store Config. Okay, same story: description, max age — basically the TTL — file storage and replicas. Oops. So let's just call it string. Probably yes. No — if. Okay. Let's see if it works, maybe. No, it will not work, because I didn't do the Rust magic, and now the magic failed. Okay. So now let's put — no, it will not work yet. Let's run this. Okay. And now we can watch the file store: watch, storage. Okay. And now let's put in some values. First we have to have something to put in there. 
So usually the question is: what if I want to use key value storage, but I want to have big files? In JetStream you can configure the maximum size of a single message, but going above a megabyte or a few megabytes, it's getting not that performant and it's not a good idea. So we thought, okay, if you want to have bigger files, that's still possible — we just have to chunk those bytes into smaller pieces. That's exactly what the object store does: it chunks those bytes into smaller pieces, each being a single message. So let's have a file here: fs File, yes, open. Okay, I don't remember what the file is called — test file. Again, very creative. Okay. So I'm not reading the file, I'm just creating a reader — the same can be done in Go — and now I'm putting the file here. So we don't, you know, go through the I/O, read the whole file and then put it back. And now: nats object store, ObjectMeta. I don't remember what has to be here, but that's fine. Okay, let's give it a name. Okay. And we probably have to do this anyway. Okay. In Rust, variables are all immutable by default, so we have to make them explicitly mutable. And this is just a reference; I will not get into this because it would take a bit long. Okay. Yeah. So that's fine. We're creating a new meta that describes how the file is named — we can have some other stuff in there — and we're putting just a reader there. So this should put the file in. Compiling, compiling. Yeah. And you see, I again have a watcher, so I'm still able to watch the changes that are happening. And the file is now chunked into smaller pieces, basically. The object store API is pretty simple. So what else can we do? We can delete it. But maybe before deleting it, I will also show you: we can list all the object stores, and we can list the files in a given object store. We can get them, of course, et cetera. 
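The chunking the object store does under the hood is conceptually just this: split a large byte blob into fixed-size pieces small enough to each be a single message, and concatenate them back on the way out. The sketch below shows only that idea; the real object store also attaches metadata (name, digest, chunk count) and uses its own default chunk size, which this toy does not model.

```rust
/// Split a byte blob into fixed-size chunks, each small enough to be a
/// single message. The last chunk may be shorter. Panics if chunk_size
/// is zero (slice::chunks requires a positive size).
fn chunk(data: &[u8], chunk_size: usize) -> Vec<Vec<u8>> {
    data.chunks(chunk_size).map(|c| c.to_vec()).collect()
}

/// Reassemble the original blob by concatenating the chunks in order.
fn reassemble(chunks: &[Vec<u8>]) -> Vec<u8> {
    chunks.concat()
}
```

A consumer reading the object simply pulls the chunk messages in order and streams them out, which is why large files stay performant even though every chunk is an ordinary JetStream message.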
All the operations you'd typically do over some kind of storage. And we can delete the object. But maybe before deleting it — because afterwards it would be too late — we can also seal it. Because I can put the same or different files, overwrite the files, for example, right? Create many files in the object store with different names, or overwrite them. What sealing allows is that when you seal a given object store bucket, no further changes are possible. You're basically making an immutable object store. And yeah, that's basically how it works. Again, it could be pretty useful for those who have to have control over how the files are handled. And of course, after this, we can delete the object. And the last thing I wanted to show you did not work, probably because I sealed it, so I cannot delete it like this. Yeah, I will not get into this now. I will just do it here. And now let's delete it: storage, delete, probably. Yeah. And basically the old data is now gone. So we created key values, manipulated key values, showed you how to delete them and how to have non-destructive operations on the key value — all of that backed by JetStream — and then did the same with the object store. So that would be it. Thank you very much for coming in such a big audience. Very happy to have you here. And if you want to talk about it, just reach out to me — I'm available here. And thank you very much.