Hello, and welcome to the NATS update presentation. My name is Jean-Noël Moyne, and along with my colleague, Matthias Hanel, I'm going to give you an update on the latest in NATS, as well as show you a demonstration of some of the things you can do with it. First of all, for those of you who are not already familiar with NATS, NATS is a complete, production-proven, cloud-native messaging system that is made for developers and operators who want to spend more time doing their job and less time worrying about how to do messaging. The DNA of NATS is performance, simplicity, security, and availability. It runs and can be deployed anywhere, from the cloud to the edge and in between. NATS 2 is a complete messaging solution because it provides the best coverage of messaging features of any messaging solution. In order to illustrate that last point, let me go through a non-exhaustive list of features that are offered by NATS. The most basic functionality of NATS is publish-subscribe using subject-based addressing. NATS lets you do that with high speed, high fan-out, and over 40 different client libraries in as many languages. NATS also allows you to do request-reply with inbox messaging and queued durable subscribers. NATS also has streaming functionality with persistence of the streams, multiple replay policies, exactly-once delivery, and it uses an optimized Raft quorum mechanism. NATS also offers security, from access control to encrypted transports to multi-level delegated administration and encryption at rest. NATS also has some pretty unique features, such as subject mapping and the ability to apply limits to the subjects being published. Also, NATS is extremely scalable. You can easily create clusters of servers and create clusters of clusters to be able to have multi-cluster global deployments over, for example, multiple clouds, whether it's in the cloud, on premises, at the edge, or any hybrid thereof. It also provides service geo-affinity.
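The subject-based addressing mentioned above can be sketched in a few lines. This is a toy Python model of Core NATS subject matching, not the server's implementation: '*' matches exactly one token and '>' matches one or more trailing tokens.

```python
def subject_matches(pattern: str, subject: str) -> bool:
    """Toy sketch of Core NATS subject matching.

    '*' matches exactly one token; '>' matches one or more trailing tokens.
    """
    p_tokens = pattern.split(".")
    s_tokens = subject.split(".")
    for i, p in enumerate(p_tokens):
        if p == ">":
            # '>' must match at least one remaining token
            return len(s_tokens) > i
        if i >= len(s_tokens):
            return False  # subject ran out of tokens
        if p != "*" and p != s_tokens[i]:
            return False  # literal token mismatch
    # without a trailing '>', token counts must match exactly
    return len(p_tokens) == len(s_tokens)
```

For example, a subscriber on orders.*.created receives messages published on orders.eu.created, while a subscriber on orders.> receives every subject under the orders hierarchy.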
And finally, NATS has easy configuration and very lightweight servers that can run on very low-end hardware. It is easily embeddable, is open source, and has a great community around it. Now, if you were to look at other messaging solutions out there, you would find that they only cover parts of all these features, and some of these features are not covered by any of them. In practice, what this means is that you end up with a patchwork of various messaging systems covering different, somewhat overlapping sets of features that are completely independent from each other and therefore not integrated, whether it is for security, the ability to exchange data between them, the ability to apply policies, et cetera. Now, let's look at all the new features that were introduced in NATS 2. As this timeline shows, there were a lot of features introduced recently with the release of 2.2. I'm going to go through some of them now. Let's start with security. The security in NATS is hierarchical and allows for delegated administration. NATS uses signed JSON Web Tokens (JWTs) to describe operators, accounts, and users. Operators create accounts. Accounts create users. And users present their credentials and permissions in the form of a signed JWT, such that there is no need to configure the servers with account or user data. The servers only need to know about operators, and they use the operator public key to validate the trust chain of the JWT. This also means that the servers never need to know any of the private keys. You can therefore easily delegate administration of the creation of accounts and users by distributing the appropriate private keys to the right people. One thing to understand about security in the context of NATS is that, although you could manage individual end users with it, most of the time, because NATS is a middleware, users are actually applications. So you want to assign users to adapter instances or single-tenant instances of services and message streams.
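The trust chain just described (operator signs account, account signs user) can be sketched as chain validation. This is a toy model: real NATS signs JWTs with ed25519 nkeys, which this sketch deliberately leaves behind a pluggable verify_sig callback; the field names mirror JWT conventions but the structure is illustrative.

```python
def validate_user(user_jwt, account_jwt, trusted_operator_keys, verify_sig):
    """Toy walk of the NATS trust chain: operators sign accounts,
    accounts sign users.

    A server configured with only the operator's public key can validate
    the whole chain. verify_sig(jwt, issuer_public_key) is pluggable;
    real NATS uses ed25519 nkey signatures, not implemented here.
    """
    # 1. The account JWT must be issued by a trusted operator key...
    if account_jwt["iss"] not in trusted_operator_keys:
        return False
    # ...and its signature must check out against that operator key.
    if not verify_sig(account_jwt, account_jwt["iss"]):
        return False
    # 2. The user JWT must be issued by that account's own (subject) key...
    if user_jwt["iss"] != account_jwt["sub"]:
        return False
    # ...and be correctly signed by it.
    return verify_sig(user_jwt, user_jwt["iss"])
```

Note that nothing in the chain walk requires a private key on the server side, which is exactly the delegation property described above.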
Again, applications only need to present their JWT to the servers to be authenticated. Those JWTs are created by the account key holders and hold the definition of which subjects the user can publish and subscribe to. Accounts can be tenants or business units. Each account has an isolated subject namespace: what is published on subject A in one account is normally not received on subject A in another account. You create accounts for single-tenant environments, for each of the shared multi-tenant services and message streams, and for your control plane. The account JWTs, which are created by the operators, define the message routing between the accounts. You can specify which subjects are imported and exported and which services are imported and exported between accounts. I would like to now spend a little bit of time going over what is arguably the most interesting and important new feature that was introduced in NATS 2.2 a couple of months ago: JetStream. JetStream replaces STAN, also sometimes known as NATS Streaming, as the new streaming functionality of NATS. STAN is now legacy and, while still maintained, if you do any new development in streaming with NATS, you should use JetStream. You should also consider migrating your existing NATS Streaming applications to JetStream. JetStream takes the lessons learned from STAN and builds a brand new implementation that offers many advantages over STAN. JetStream provides much better integration with Core NATS, for example. This allows you to provide a transition path to streaming for existing Core NATS applications. JetStream is distributed and consistent. It uses a NATS-optimized Raft quorum algorithm, which is much faster and requires no configuration compared to, for example, a Paxos-based quorum algorithm.
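The account isolation and import/export routing described above can be modeled in miniature. This is a toy Python broker (exact-match subjects only, far from the real server): messages stay inside an account's namespace unless the publisher's account exports the subject and another account imports it.

```python
class ToyBroker:
    """Toy model of per-account subject namespaces (exact-match subjects).

    A message published in one account is delivered only within that
    account, unless the subject is exported by the publisher's account
    and imported (possibly under a different local subject) by another.
    """
    def __init__(self):
        self.subs = []        # (account, subject, callback)
        self.exports = set()  # (account, subject) made available to others
        self.imports = []     # (to_account, local_subject, from_account, remote_subject)

    def subscribe(self, account, subject, callback):
        self.subs.append((account, subject, callback))

    def publish(self, account, subject, msg):
        # Intra-account delivery: same account, same subject.
        for acct, subj, cb in self.subs:
            if acct == account and subj == subject:
                cb(msg)
        # Cross-account delivery only through an explicit export + import.
        if (account, subject) not in self.exports:
            return
        for to_acct, local, frm, remote in self.imports:
            if frm == account and remote == subject:
                for acct, subj, cb in self.subs:
                    if acct == to_acct and subj == local:
                        cb(msg)
```

Without the export and import entries, a subscriber in account B never sees what account A publishes, even on the same subject name.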
You can create JetStream clusters of one, three, or five servers, depending on whether you do not want any kind of fault tolerance, or want to be able to survive one or two servers going down at the same time. JetStream implements disaster recovery through mirroring between streams. JetStream supports file or memory storage and offers decoupled flow control between the publishers to a stream and the subscribers of that stream. JetStream is naturally integrated with and benefits from the NATS security system. This means, for example, that you can easily control the import, export, and copying of streams between accounts. Streams can have multiple sources, meaning multiple subjects, including using wildcards, or other streams. JetStream has three retention policies available. Either limits, meaning that messages are stored in the stream only up until some limit is reached, at which point you can decide to discard either the oldest or the newest message in the stream in order to make room for the new message. Interest, meaning that the data is only kept in the stream for as long as there are either durable or ephemeral subscribers on that stream. And finally, work queue, which allows you to use a stream as a queue. You can impose per-stream and per-subject message limits, like size, for example. JetStream gives you many options when it comes to replay policies. You can decide to replay all of the messages in a stream, the last message in a stream, the last message for each subject in a stream, just new messages, or start at a sequence number, or start at a point in time. Finally, you can decide the speed at which you want the replay to happen. You can either decide to have the instant replay policy, which means that the data is sent to you as quickly as you can consume it, or go with the original replay policy, which means that the data is replayed to you at the same speed at which it was published in the first place.
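The limits retention policy with its discard-old versus discard-new choice can be sketched as follows. This is a simplified Python model capping only the message count; real JetStream also supports byte and age limits, which are omitted here.

```python
from collections import deque

def append_with_limit(stream, msg, max_msgs, discard="old"):
    """Sketch of the 'limits' retention policy with a max-message cap.

    discard="old" evicts the oldest message to make room for the new one;
    discard="new" rejects the incoming message instead.
    """
    if len(stream) < max_msgs:
        stream.append(msg)
        return True
    if discard == "old":
        stream.popleft()   # drop the oldest to make room
        stream.append(msg)
        return True
    return False           # discard="new": refuse the new message
```

With a cap of three and discard-old, appending a fourth message evicts the first; with discard-new, the fourth append is simply refused and the stream is unchanged.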
JetStream provides you with both push consumers, which are event-driven, and pull consumers, which are demand-driven, allow for batching, and make for very easy horizontal scaling of the processing of all the messages in a stream. You can have durable or ephemeral consumers, as well as explicit or automatic acknowledgements, meaning that the application can decide to explicitly acknowledge or let the library do it. You can have multiple kinds of acknowledgements in JetStream. You can send back an ack to signal the correct processing of the message, a nack to signal that you could not process the message and that it should be re-delivered, a term to signal that the message should not be re-delivered to you, and an in-progress acknowledgement, which simply means that you need more time to process the message and do not want the server to try to resend that same message to you or to another consumer because you're taking too long to process it. You can have explicit or all acknowledgements, meaning you explicitly acknowledge just a particular message in the stream, or all of the messages older than the message you are currently acknowledging. JetStream even offers exactly-once delivery, a functionality that is often missing from messaging systems. It does it by combining message deduplication at the source, allowing applications to insert a unique ID in a header field for the messages that they publish, with double acknowledgements on the receiving side, where the consumer of the message sends an acknowledgement back to the server, expressing the fact that it has successfully consumed the message, and then waits for a second acknowledgement back from the server, telling it that the server has indeed received that acknowledgement properly. JetStream also allows you to mirror streams, which has some pretty interesting implications besides the obvious disaster recovery. Because JetStream is integrated with NATS, all streams are easily observable.
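The publish-side half of exactly-once delivery, deduplication on a unique message ID within a rolling time window, can be sketched like this. The window bookkeeping is simplified, and the clock is injectable so the behavior is easy to test; real JetStream keys the window on the Nats-Msg-Id header.

```python
import time

class DedupWindow:
    """Sketch of JetStream-style publish deduplication by message ID.

    JetStream deduplicates on the Nats-Msg-Id header within a rolling
    time window; bookkeeping here is simplified for illustration.
    """
    def __init__(self, window_seconds, clock=time.monotonic):
        self.window = window_seconds
        self.clock = clock
        self.first_seen = {}   # msg_id -> time of first acceptance

    def accept(self, msg_id):
        now = self.clock()
        # Purge IDs that have aged out of the deduplication window.
        self.first_seen = {i: t for i, t in self.first_seen.items()
                           if now - t < self.window}
        if msg_id in self.first_seen:
            return False       # duplicate within the window: drop it
        self.first_seen[msg_id] = now
        return True            # first sighting: store the message
```

A publisher that retries after a timeout simply re-sends with the same ID; the second copy lands inside the window and is dropped, so the stream stores the message once.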
You can always subscribe and get a copy of all the messages that are going into the stream. JetStream also offers another functionality that is often missing, which is encryption at rest of the data being persisted. And last but not least, JetStream is, like NATS, simple and easy to administer. It needs very little configuration and can be easily managed. And finally, one of the greatest features of JetStream is its speed. With JetStream, you can achieve some very high throughput putting messages into streams, and especially replaying messages from streams. Those rates can be orders of magnitude faster than what you get out of other streaming solutions. Moving on from JetStream, I would now like to spend a little bit of time going over some of the unique functionalities of NATS. One of them is subject mapping. Subject mapping is relatively simple to understand. It's the ability to say that any message published on subject foo will be rewritten to be on subject bar. The obvious interest of doing this is that it gives you administrative control over the subject namespace. For example, you can decide to map messages sent to service.foo to service.foo.v1 and then later change that mapping to service.foo.v2. Subject mapping also allows you to reorder tokens in a subject. So, for example, you can define a mapping from bar.*.* to baz followed by the second token of the subject, then the first token. You can also do weighted mappings for forms of traffic shaping. For example, you can decide to map 90% of the traffic on service.foo to service.foo.v1 and only 10% to service.foo.v2. This allows you to do things such as A/B testing, canary releases, and even, if you redirect some of the traffic to a subject nobody is listening to, introduce artificial message loss. You can always reload changes to these mappings without any downtime on the servers. Finally, the mappings can be global or per account, or they can even happen as you route messages between accounts.
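Both mapping features just described, token reordering and weighted traffic splitting, can be sketched in Python. This is a toy model, not the server's implementation: the source pattern uses '*' wildcards and the destination references matched tokens as $1, $2, and so on (NATS configuration also offers a {{wildcard(n)}} syntax for the same idea).

```python
import random

def apply_mapping(subject, src, dest):
    """Toy NATS-style subject mapping with '*' wildcards.

    dest references matched wildcard tokens as $1, $2, ...
    Returns None when the subject does not match the source pattern.
    """
    s_tokens, p_tokens = subject.split("."), src.split(".")
    if len(s_tokens) != len(p_tokens):
        return None
    wildcards = []
    for s, p in zip(s_tokens, p_tokens):
        if p == "*":
            wildcards.append(s)       # capture the matched token
        elif p != s:
            return None               # literal token mismatch
    out = []
    for d in dest.split("."):
        # $n substitutes the n-th captured wildcard token
        out.append(wildcards[int(d[1:]) - 1] if d.startswith("$") else d)
    return ".".join(out)

def weighted_destination(dests, rng=random.random):
    """Pick a destination subject by weight, e.g. a 90/10 canary split."""
    r = rng() * sum(w for _, w in dests)
    for dest, w in dests:
        r -= w
        if r < 0:
            return dest
    return dests[-1][0]
```

So mapping bar.*.* to baz.$2.$1 turns bar.x.y into baz.y.x, and a weighted list such as [("service.foo.v1", 90), ("service.foo.v2", 10)] sends roughly 90% of traffic to v1.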
Another new thing that was introduced in NATS 2.2 that is worth going over is the NATS command-line interface tool. This is a command-line tool for interacting with, monitoring, and administering NATS and streams. You can use it for doing very basic NATS operations, such as publishing a message, listening to messages published on a subject, sending requests and waiting for replies, or listening for requests and sending replies. You can use it to view reports and information about servers, connections, streams, and consumers. You can use it to listen to system events and view a lot of information about the details of the connections, account latencies, servers, etc. You can also use it to perform on-the-fly administration and monitoring of streams and stream consumers. You can not only create and delete streams and consumers, but also view information and reports on streams. You can also view messages inside streams, monitor streams, and even administratively remove messages from streams. You can, as well, use the tool to back up and restore streams, and even interact with a stream's Raft cluster, such as, for example, triggering a new leader election. The NATS command-line interface tool also allows you to do key-value operations and run all kinds of benchmarks to measure the performance of NATS and JetStream in your target environment. You can run benchmarks of Core NATS publish and subscribe, request-reply, and, for JetStream, benchmarks using synchronous or asynchronous publishers and consumers. One thing to remember about the NATS command-line interface tool is that it has a cheat sheet built in. You can use nats cheat to get a generic cheat sheet, or run nats cheat on a particular command, such as, for example, nats cheat bench, to get examples and more information about what you can do with that particular command.
And finally, as we transition to Matthias's part of the presentation, I want to spend a little bit of time introducing the last important feature that was introduced in NATS 2, which is the flexible deployment architecture. The NATS deployment architectures allow you to deploy over multiple data centers, multiple regions, multiple clouds, on-prem, or on the edge, even if that edge is only partially connected, or any combination thereof. The simplest NATS deployment architecture is a single NATS server with clients connecting directly to it. Very quickly, you will want fault tolerance and/or scalability and want to create NATS clusters. NATS clusters are made up of any number of NATS servers that communicate with each other and service any number of client applications. Now, a single cluster has limits. For example, you may not necessarily want to distribute a single cluster over various data centers or multiple cloud operators. For this, NATS has the concept of superclusters. Superclusters are made up of clusters of NATS servers that are connected together by gateway connections. Gateway connections route messages intelligently between clusters: they do not send messages across unless they really need to be sent across. One example of this is the fact that superclusters leverage geo-affinity for services. What I mean by this is that if you have client applications connected to a server and publishing requests on a particular subject, the supercluster is smart enough to know that if there is a listener currently servicing requests on this subject within a cluster, the request should be sent to that listener. Only in the case where the cluster knows that there are no currently available consumers for those request messages within the cluster does NATS send the request over the gateway connection to other clusters, where there may be another instance of the service that will process those requests.
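The geo-affinity decision just described boils down to a simple rule, sketched here as a toy Python routing function (interest tracking in the real server is far richer than these sets): serve locally when a responder exists in the local cluster, and only otherwise hop over a gateway.

```python
def route_request(subject, local_interest, remote_interest):
    """Toy sketch of supercluster geo-affinity for request routing.

    A request stays inside the local cluster whenever a responder is
    listening there; only when there is no local interest is it
    forwarded over a gateway to a cluster that has registered interest.
    """
    if subject in local_interest:
        return "local"
    for cluster, interest in remote_interest.items():
        if subject in interest:
            return cluster   # gateway hop to an interested cluster
    return None              # no responders anywhere in the supercluster
```

The effect is that requests never cross a gateway when a local instance of the service can answer them.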
This allows you to have both disaster recovery and the ability to leverage locality when needed. Finally, NATS also has the concept of a leaf node. A leaf node is sort of an extension of a cluster that you can deploy wherever you need one. And now I'm going to let Matthias tell you more about leaf nodes and what you can do with them. Hello, my name is Matthias. I'm a computer engineer at Synadia. I want to give you a brief demo of our adaptive edge architecture. Let's begin. While you can connect your applications directly to a NATS cluster or supercluster in a cloud, some applications might have additional requirements, such as the ability to communicate locally while internet connectivity is down. This is typically the case when the edge moves (think ship or car), when internet service is generally unreliable, or when you have so many, say, stores that the sheer number times a small likelihood of an outage is bound to make a customer unhappy somewhere. Our solution for this is to connect NATS servers as leaf nodes to your cluster or supercluster in the cloud. Applications will connect to leaf nodes, thus retaining the ability to communicate among each other even when the leaf node's connection is down. Leaf nodes themselves are lightweight and can run on a Raspberry Pi. To servers in the cloud, they appear essentially as a client connection that proxies the non-local traffic of their respective clients. I could set up a NATS supercluster as done in other, longer-running demos, but in the interest of time, I'll use NGS, which is a NATS supercluster hosted by Synadia. NATS is built to be multi-tenant and operated as a shared utility, so instead I can focus on what a developer for the adaptive edge would need to do. Let me show the config connecting a single server as a leaf node to a supercluster. This is your basic JetStream-enabled leaf node configuration, connected to NGS in the cloud.
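A JetStream-enabled leaf node configuration along these lines might look as follows. This is a sketch: the server name, domain, port, store path, and credentials file are illustrative stand-ins for the environment-variable-driven values used in the demo.

```
# Sketch of a JetStream-enabled leaf node config (illustrative values).
server_name: leaf1
port: 4222

jetstream {
  # The domain keeps this JetStream independent from the one in the
  # cloud and addressable from anywhere in the NATS network by name.
  domain: leaf1
  store_dir: "./js_store_leaf1"
}

leafnodes {
  remotes = [
    {
      # NGS leaf node endpoint; credentials come from your NGS account.
      url: "tls://connect.ngs.global:7422"
      credentials: "./ngs.creds"
    }
  ]
}
```

Starting a second leaf node on the same box is then just a matter of changing the port, server name, domain, and store directory.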
I'm referencing environment variables, so I can easily make the necessary adjustments to start this a second time on the same box. Let's start our leaf node. Starting this will result in a server listening on port 4222, with the server name and domain name set to leaf1 and the store directory set to the one for leaf1. To generate local traffic, I subscribe on a subject and publish on it once a second as well. And then I do this a second time, using slightly different values for my environment variables. And to keep traffic local and keep this demo shorter, I'm using a different subject name here as well: subject.2 versus subject.1. Another requirement is to bridge internet downtimes by persisting important data and automatically uploading it once connectivity is back. Our solution for this is to have JetStream, our persistence layer, enabled in the cloud as well as in the leaf nodes. Inside a stream, JetStream then records whatever local traffic you're interested in. A stream residing in the cloud is then set up to source from the ones in each leaf node. This causes the automatic download of messages as they are stored locally, or, in the event of an outage, once connectivity is back. The important part about this is to specify the domain inside the JetStream config block. This is what keeps the JetStream in this leaf node independent from the one in the cloud and makes it available from anywhere within your NATS network under this name. Now I'm creating a stream in each domain using a pre-created config file for a stream without limits. My currently active context always connects me to the cloud, thus I provide a domain name to specify where I want the stream named buffer created in. So here, leaf1 and leaf2. To override the ones in the config file, I specify the subject used in the respective domain: subject.2 and subject.1. Now we have created two streams with a limits-based retention policy in the respective domains.
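The cloud-side aggregate stream that sources from both leaf node domains could be configured roughly like this. This is a sketch, not the demo's exact file: the stream name is assumed, and the external API prefix follows the $JS.<domain>.API convention that JetStream uses to address a domain's API.

```json
{
  "name": "aggregate",
  "subjects": [],
  "retention": "limits",
  "storage": "file",
  "sources": [
    {"name": "buffer", "external": {"api": "$JS.leaf1.API"}},
    {"name": "buffer", "external": {"api": "$JS.leaf2.API"}}
  ]
}
```

Because both source streams are named buffer, it is the external domain API prefix that tells the cloud stream which leaf node each one lives in.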
For the purposes of this demo, I set the limits to be unlimited. In the domain referencing the cloud, another stream is then set up to source from both of these streams. Since I am sourcing from two streams named buffer, these are the names I provide. Everything else I'll be asked during the questionnaire. So yes, from a different domain: leaf1. Yes: leaf2. When looking at the stream report for streams in NGS, as well as in the two domains created, you will see that the message count for the stream just created is roughly the sum of the other two; roughly, because of the timing of when the requests hit the server. So 217, versus 109 and 109, which is 218, but this request was made first. I'm also creating a durable push consumer that will give us everything in the stream just created. Our durable is called sourcedir, with deliver policy all, explicit acknowledgement, replay policy instant, and flow control enabled. And now I'm consuming from that. We already retrieved all the messages. They're now coming in, essentially alternating, because the stream is caught up and is getting the messages as they're being sent on a second-by-second basis from each leaf node. To also demonstrate that this can cope with an outage, I am quickly turning my Wi-Fi off and on. I'm using ping to demonstrate that I'm actually offline. And the Wi-Fi is off. You can see that our publishers and subscribers are happily continuing, directly connecting to leaf1. You can also see that the message count for the stream buffer is steadily increasing. And since the connection to NGS is down, the servers print error logs as well. Now I'm turning my Wi-Fi back on. There we go, we're online. While the leaf nodes were offline, we kept storing data in local streams. Yet on reconnect, the data missed was copied over, and our message counts should match up again: 362 and 363, which roughly comes up to 722. And our consumer is receiving messages as well. This is it for this demo. I hope you found this useful, and I'm looking forward to your questions.