and hand it over to Derek to kick off today's presentation. Thank you, Kim. Welcome everyone. Good morning, good afternoon, good evening from wherever you are. Thanks for attending this webinar about a new way of thinking around connectivity. I think it's time for us to think differently. We are in a whirlwind of technology these days, as most people on the call know, where technologies come and go at a furious pace. But I believe we're really at the beginning of the connected economy. We've moved on from the computer economy, and most people believe we're in the data economy. But how that data and those services are accessed, and securely accessed, is I believe the coming age, and that's why I call it the connected economy. And this is where the largest opportunity lives. We are moving into this connected economy, and I believe it's applicable to all the world's digital systems, services, and devices. A quick history on distributed systems: we've created this problem, this complexity, ourselves. There used to be just one computer, and if you wanted it to go faster, you bought a faster computer. But systems are now global, and they involve hundreds if not thousands of different components. They're complex, and sometimes expensive and slow. And I believe this situation is getting worse. Throughout my career, what I've always been fascinated by is this pendulum motion of technology, where we swing back and forth. And so I'm confident that we will swing back to simpler times, even as we tread new ground with new technologies and new features to deliver better on the technology that we have. But we're in that phase right now where things feel like they're just becoming more and more complex, more expensive, and slower. So today I want to talk a little bit about how, as we move into the connectivity economy, different types of technologies and different choices could help reverse some of those trends.
So in terms of connectivity, I believe this is the key. I really do. It's not necessarily about the data or the services or the compute, which were the major factors when we were going through the cloud revolution. Connectivity today is the key. And we've seen this in other industries; this isn't something that's totally new. We saw this with the internet. We saw it with the global cellular networks as well, mostly for humans. But what we lack, in my opinion, is a true utility-type technology that can connect all the world's digital systems, services, and devices. I believe the connectivity needs to be ubiquitous. It needs to be a true utility and not a silo. And that's something I think most people skip, which is that most technologies are silos. We've seen this with other technologies as well, like databases, where a database was a silo for a certain group or business organization, and companies would end up with many of them. Even in companies that have said, oh, we're adopting this type of technology, you'll notice that there are multiple silos of it running. And there's usually an ongoing effort, a brave effort, to say, hey, we should actually just create a utility that everyone can share. But the technologies usually aren't built to do that. And to be honest with you, when NATS began its life almost 10 years ago, it wasn't either. These silo technologies simply cannot serve as a true utility. I think it needs to be secure and isolated by default. I don't think those can be an afterthought, similar to what happened with the web, where security was bolted on after the fact to open up the world of e-commerce and banking and things like that. And although things are isolated by default, meaning Coke and Pepsi and UPS and FedEx can participate on the same infrastructure, there needs to be a way to securely share data streams and services.
It should scale effortlessly and support multiple patterns, both services and streams, data and event. And in my opinion, this technology should self-heal. It should be agile, and applicable multi-cloud, on-prem, at the edge, and for IoT. Location transparency means access to services and event streams from anywhere in the world, on any type of deployment platform, any cloud provider, any edge gateway; it should not be confined to particular deployment platforms and clouds. It should be observable from anywhere, meaning it shouldn't have to run in a certain area within a certain framework before you can say, oh, I can observe my service. I understand latency, I understand where requests are coming from, and what the latency is between the requestor and the responder. This should simply be available, securely. It should be built on open source technology. We're part of the CNCF and we believe deeply in their charter. We also believe in open ecosystems and in decentralized and federated software-as-a-service offerings. It cannot be controlled by one entity or one company; I think it needs to be federated by design from the beginning. So why is messaging important? Well, again, the number of moving pieces in today's distributed systems continues to go up. What used to be something as simple as creating a socket connection from a client to a server has evolved into the world we are in today, which is vastly more complex. And the pressure on the technologies that connect all these pieces continues to increase. We can see that in the market's response with things like service mesh, where people are trying to rationalize the functions and features we need to bring all this complexity back into something that's easy to reason about and understand. We believe that multiple messaging patterns should be bundled into the same technology.
Streams and services: with a service, I ask a question and get an answer, that's request-response; and of course streams, whether event streams or data streams. These shouldn't be different technologies. There should be location transparency, not being tied to IPs or hostnames and things like that. Producers and consumers should be decoupled; they should have no awareness of each other outside of a security context. Meaning, when I get a request, I know it's secured and authorized by default, but I don't have to know where the requestor is, what client language it uses, or whether it's deployed in the same cloud. None of that matters in a modern system. And again, it should be extensible and open by default. So what is NATS? For those who don't know, NATS is a simple, secure, production-proven messaging technology for modern distributed systems. It was originally built to power Cloud Foundry at VMware as a control plane and a telemetry system. The telemetry was the streams of data coming off the system so we could understand what was going on, and the command and control, think of that as services: I want to deploy something, I want to know how much CPU this application is using. It has scalable services and streams. What I mean by scalable services is that you run endpoints, and to scale them up, you just run more of them. There are no load balancers, no sidecars, no proxies. As a matter of fact, one of the things the team is very proud of is that applications written almost 10 years ago, NATS is a mature technology, can still run on a modern NATS 2.0 system today. It has actually been that backward compatible. And again, for a service to scale, you just run more of them, and to scale it down, you run fewer. All of this is built into the system and you get it for free. It should be extremely easy to use for developers and operators.
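To make that "scale a service by just running more responders" idea concrete, here is a toy Python sketch. It is not real NATS client code; the Bus class and all its names are invented purely to illustrate the pattern of request-response over a subject with no load balancer in front.

```python
import random

class Bus:
    """Toy in-process stand-in for a NATS-style subject bus (illustration only)."""
    def __init__(self):
        self.responders = {}                      # subject -> list of handlers

    def subscribe(self, subject, handler):
        self.responders.setdefault(subject, []).append(handler)

    def request(self, subject, payload):
        # Scaling is just running more responders on the subject:
        # each request is answered by exactly one of them.
        handlers = self.responders.get(subject)
        if not handlers:
            raise TimeoutError(f"no responders on {subject!r}")
        return random.choice(handlers)(payload)

bus = Bus()
# "Scale up" by registering two identical endpoints; no proxy or LB involved.
bus.subscribe("greet", lambda name: f"hello, {name}")
bus.subscribe("greet", lambda name: f"hello, {name}")
print(bus.request("greet", "Kim"))                # answered by exactly one instance
```

Scaling down is the mirror image: remove a handler from the list, and requests simply stop landing on it.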
I should be able to develop an application on my workstation or my laptop and transition it to production with the least amount of change and friction. Even from the very beginning, when NATS was built for Cloud Foundry, there was the notion of being highly resilient, meaning it protects itself at all costs. It doesn't protect you, the user, at all costs. What I mean is that a lot of enterprise, silo-based technologies suffer what used to be called, at least for databases, the query of death: a client application asks the system to do something, and the system bends over backwards to do it, to the detriment of all the other users. When you start looking at a technology as a utility technology, not a silo technology, you need to think differently about how it reacts to inputs from different users. For NATS, the core philosophy is that it's highly resilient and protects itself at all costs. It's highly secure and extremely lightweight; you can run a full-blown server on a Raspberry Pi. Again, NATS has been around for almost a decade now. It has client support for over 32 (and counting) different programming languages. It's a CNCF project with Kubernetes and Prometheus integrations, and more to come. As the CNCF has grown at such an amazing rate, there are new projects that are fascinating and interesting, and we're looking at how to work with them the best way we can. And again, originally built to power Cloud Foundry, NATS is applicable not only in the cloud but multi-cloud, on-premise, at the edge, and for IoT. It does subject-based routing, not IP-based; that's huge in my opinion. It's message-based, not packet or byte streams. It has, again, scalable services and streams in the same technology. It natively does many-to-many communications with queue-based delivery, not just one-to-one. Secure and isolated, decentralized by design.
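Subject-based routing is worth pausing on. NATS subjects are dot-separated tokens with two wildcards: `*` matches exactly one token and `>` matches one or more trailing tokens. The matcher below is a minimal Python sketch of those rules, written from scratch for illustration, not taken from any NATS code base.

```python
def subject_matches(pattern: str, subject: str) -> bool:
    """NATS-style subject matching: '.' separates tokens, '*' matches exactly
    one token, '>' matches one or more trailing tokens."""
    pt, st = pattern.split("."), subject.split(".")
    for i, tok in enumerate(pt):
        if tok == ">":
            return len(st) > i            # '>' needs at least one more token
        if i >= len(st):
            return False                  # subject ran out of tokens
        if tok != "*" and tok != st[i]:
            return False                  # literal token mismatch
    return len(pt) == len(st)             # no wildcard tail: lengths must agree

assert subject_matches("time.us.*", "time.us.east")
assert not subject_matches("time.us.*", "time.us.east.atlanta")
assert subject_matches("time.>", "time.us.east.atlanta")
assert not subject_matches("time.>", "time")
```

Routing on names like these, rather than on IPs and hostnames, is what makes the location transparency described above possible.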
We'll talk a little bit more about what that means when you get to very, very large scales. Secure sharing: it understands the notion of exports of streams and services, and imports. Imports are your dependencies; exports are essentially your interface. It's self-healing and highly observable. And it's capable of being federated as a global utility, a global dial tone under a single URL that's available anywhere in the world, but again, secured by default. So, the CNCF landscape. Again, this is an amazing organization and effort that is growing so fast that a lot of these landscape diagrams are starting to look like an eye chart. But we are on there, in the streaming and messaging section, as you can see. The way the CNCF positions us is as part of streaming and messaging, and it puts gRPC in there as well. But obviously, for those outside the CNCF who are aware of Apache Kafka, there's a lot of overlap between the streaming components of Kafka and what NATS can do in a lighter-weight, simpler fashion. We have a very quickly growing community. NATS started out as a technology just for VMware, Cloud Foundry, and BOSH at the time. But now that it's been opened up and become part of the CNCF, the ecosystem is growing extremely quickly. So about a year and a half ago, we undertook what's called NATS 2.0, a modern version of NATS. And again, an application built 10 years ago still runs exactly the same way, untouched, on NATS 2.0. NATS 2.0 was an effort by a lot of the founding members of the technology to bring together something that could connect all the world's digital systems, services, and devices. That sounds extremely ambitious, and it is. But we also look back at what happened with things like the internet and the global cellular network, where the opportunities that presented themselves just by having this massive connectivity were simply unforeseen.
I don't think any of us in 1994 would have understood what the internet would look like and provide for everyone on a global basis today. And while there are 7.2 billion people, with almost 5 billion of them connected, we're going to be around 75 billion digital systems, services, and devices by 2020. So the problem is a lot different: a much more massive opportunity, and also a much more massive challenge. For us, the 2.0 release was the largest feature release we've done since the original code base. But again, it's totally backward compatible for clients. It was designed to create a new way of thinking about NATS: not as a server, or a broker, or a cluster of servers, but as a secure shared utility that could be global, or within an enterprise, or both. NATS now solves those problems at scale through things like distributed security, true multi-tenancy, global networks, and secure sharing of data. It allows new ways of architecting systems, service meshes, and event and data streaming. We had one user who talks about how NATS 2.0 and the way we did multi-tenancy allows them to easily give each microservice its own account, which we'll talk about in a second, essentially a messaging sandbox in which they're free to do whatever they want. And when they're ready, they can export to share with the world, and they can import other people's services or streams, again, as dependencies. We released on June 10th, 2019, so NATS 2.0 has actually been around for a little while, which is good, and it's gotten some great mileage already. One of the big things we touched on was this notion of how you move from a silo technology, or single-tenant, to a multi-tenant technology. It's non-trivial. It's not something you can bolt on at the end. And so the team took quite a hard look at how we would actually do this.
What's nice is that inside the system there was already this notion of the router: given a message with a subject, it figured out where that message was supposed to go. So we concentrated specifically on that and said, that is the level at which we want multi-tenancy, such that as users come in and authorize themselves, they're attached to one of these domains, these sandboxes, these containers for messaging, so to speak. And that was how accounts were born. They're isolated, meaning Coke and Pepsi can exist on the exact same infrastructure and no message will ever be seen by the other. It lets you separate the technology from business-driven use cases: data silos that are created by design, not by software limitations, so to speak. But again, while the design is isolated by default, there was a very big emphasis on allowing data to be securely shared between accounts, both streams and services. And it's a mutual agreement, kind of like Facebook friends: I have to accept you and you have to accept me, so it's essentially secure as well. And it permits data to flow across boundaries. Now, one thing that's really interesting and important is that because these accounts, these containers for messaging, these sandboxes, are your world, when you import something, you get to pick where it shows up. For those on the call who have dealt with messaging systems in a previous life, they might remember design sessions lasting days or even weeks trying to figure out the topology, the topic topology, the subject topology: how many tokens, who's going to be where, and please don't step on each other's feet. That was brittle, it was complex, and it always failed eventually. Maybe a year later, two years later, someone accidentally forgot that, oh, Colin was using this subject, and then we would use it. Accounts and this multi-tenancy have a natural way of solving that problem.
It's your own space, and you control everything about it, including where something shows up when you import it. And so what happens is similar to people who came from the Java world, with very long method names and variable names, and then moved to Go, a language that's very common for a lot of the cloud-based stuff, where everything became very short: small variable names, very short method names. NATS 2.0 lets you do that with subjects. A lot of the deployments we're looking at now have two tokens, like KV.get or NGS.usage, things like that. And that's because of that one little piece: when you import, you still control your account space, so you can put the import wherever you want. You don't have to put it where the person who exported it had it. So that makes sense, hopefully. It's easy, secure, and cost-effective, extremely cost-effective compared to the complexities we're seeing right now in the industry. There's one NATS deployment for operators to manage, and it's decentralized: organizations can self-manage, meaning they don't have to keep submitting help desk or request tickets to say, can you please add a user? Can you change this user's permissions? Can you revoke this user? All of that is managed by the account owners, which we'll talk about. Services and streams. These are the basic patterns that we talk about. NATS is a publish-subscribe system, and in the 90s that used to be really good, and then sometimes it was bad, and now it's supposed to be good again. As a core technology, it's key. But what it enables is essentially the notion of a stream, where I publish data that other people might be interested in, or a service, where I have a listening endpoint that, if you send me a request, will actually send you a response. And the basis of most modern systems is the communication patterns of either a service or a stream.
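The import-remapping idea described above, where the importer, not the exporter, picks the local subject, can be sketched in a few lines of Python. Everything here is invented for illustration (the account name `acct_kv`, the subjects, the method names); it is a model of the concept, not the real NATS configuration API.

```python
# A hypothetical set of (exporting account, subject) pairs that were exported.
exports = {("acct_kv", "kv.service.get")}

class Account:
    """Toy model of an account's private subject namespace."""
    def __init__(self):
        self.imports = {}        # local subject -> (exporter, remote subject)

    def import_service(self, exporter, remote_subject, local_subject):
        # The importing account picks where the service appears in ITS namespace.
        if (exporter, remote_subject) not in exports:
            raise PermissionError("not exported by that account")
        self.imports[local_subject] = (exporter, remote_subject)

    def resolve(self, local_subject):
        # Requests on the short local name route to the exporter's subject.
        return self.imports[local_subject]

acct = Account()
acct.import_service("acct_kv", "kv.service.get", "KV.get")  # short local name
print(acct.resolve("KV.get"))
```

Because the mapping lives entirely inside the importing account, two accounts can import the same export under completely different local names without ever coordinating a shared subject topology.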
And again, we understand and allow you to export services and streams, and import them as you need them. There are lots of different use cases for these. Streams can also be thought of as observable services: you want to observe a service, and that observability is a stream flowing off of the service. And with NATS, the key point I hope most people take home is that in order to do that, you already had it when you started. You're not now deploying a sidecar or a proxy or a load balancer or a different framework or a different platform. On day one, when you wrote your app, you had all of that built in: that pure observability. And again, that's based on that pure pub-sub technology underneath the covers, which I think is important. One of the other interesting things is that the majority of changes to a modern NATS system involve zero client configuration or API changes whatsoever. You're not changing anything about the client application. You don't even have to recompile the code or rerun it or do anything. And again, making sure we eat our own dog food, or drink our own champagne, we have applications that are eight-plus years old that I wanted to make sure could run against a modern NATS system, and they do. This next part is extremely interesting for those who understand messaging systems a little bit. With accounts, and I wish we could say we pre-thought about this, but we didn't, we realized that an individual server inside a NATS system can have its own account that it uses to communicate. So instead of every piece of server-to-server communication and coordination being a protocol change, we just do the same thing application developers do: we just send messages.
And so even in a global super cluster, all the servers are talking to each other just using messages, through this notion of a system account. But that account is no different from any other account; they're all the same. So it has authorization, authentication, permissioning, all of that. And we think that's a very, very powerful primitive, because we're doing the exact same things application developers do, with the system itself. One of the things we care deeply about with global deployments, if you're actually trying to say, hey, we have all these silos, now let's see if we can make a utility that the company shares, maybe with a geo on the East Coast and one in Europe and one on the West Coast and one in Asia, is how we build one of these such that it's not difficult to scale, difficult to change, difficult to manage and monitor. I've been involved in a lot of systems where you get the system set up and it's so brittle or so fragile that no one wants to change it after it's up and running. And there are massive dashboards that everyone has to stare at, and pager rotations, and all kinds of crazy stuff. And I just don't like that. If the system can self-heal, I don't want to be woken up in the middle of the night. I want it to be able to provide an SLA and fix itself most of the time. And NATS 2.0 does that. We don't have to do configuration changes for most of the changes we make. Both clients and servers self-heal and talk to each other. So a client knows about the topology even if it wasn't in a DNS name or configured in the application. All of this is built in and you get it for free. Clusters can dynamically grow and scale up and down. And again, clients are automatically made aware without any client application code. None of that is needed.
And NATS is something that works extremely well inside Kubernetes deployments. We're getting a lot of usage and a lot of interest, through quite a few POCs these days, around how to do Kubernetes and multiple Kubernetes deployments and cluster those together. But NATS by itself is truly deployment-, cloud-, and geo-agnostic. It does not care. It will put itself together and self-heal regardless of how it's been deployed, which we think is important long-term. Superclusters are our way of getting to a global footprint. Most, not all, but most of these technologies do very well if they're clustered together in a scenario with good network bandwidth and RTTs. But as you stretch the technology to long RTTs, what we call stretch networks, problems appear. We've seen amazing companies, the Googles of the world and the Facebooks of the world and others, trying to solve these problems at global scale. From a communication standpoint, we needed to solve this problem as well. So we created a whole new technology that allows clusters of clusters to be networked together through a novel spine-based technology. It uses optimistic sends with interest-graph pruning, because collectively the team has over a hundred years of experience in these systems, and at large scale, a lot of the time just the chatter going back and forth on one of these systems actually brings the system down, even though there's not a lot of data moving on it. We were aware of that problem and worked really hard to put something together. The other thing is that we've always had this notion that subscribers can dynamically form a group. So Colin and I can dynamically listen on foo and say we're going to be part of the same queue group. And the system dynamically responds to that and starts delivering messages to both of us, but not every message to each: he gets one, I get one, and so on.
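That queue-group behavior, where members join dynamically and each message is delivered to exactly one member, can be modeled in a few lines of Python. This is a toy sketch with invented names, using a simple round-robin rotation; the real server's distribution strategy is its own concern.

```python
import itertools

class QueueGroup:
    """Toy model: members join dynamically; each message goes to one member."""
    def __init__(self):
        self.members = []
        self._rotation = None

    def join(self, name):
        self.members.append(name)
        self._rotation = itertools.cycle(self.members)  # rebuild the rotation

    def deliver(self, msg):
        return (next(self._rotation), msg)              # exactly one member gets it

g = QueueGroup()
g.join("derek")
g.join("colin")
deliveries = [g.deliver(f"msg{i}")[0] for i in range(4)]
print(deliveries)   # work is spread; no member sees every message
```

The point of the sketch is the contract, not the mechanism: publishers keep publishing to the same subject, and adding or removing group members changes who receives each message without any publisher-side changes.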
An application built for that 10 years ago can run on a super cluster now, but a super cluster is actually geo-distribution aware of these queue subscribers. It understands the closest members and the closest subgroups and picks those automatically. And if they're not available, it will fail over to the closest cluster based on RTT. So super clusters are, again, clusters of clusters: US West, US East, EU, that type of thing. Leaf nodes. Leaf nodes are our way of saying we think most people really want a natural transition between having a silo technology and having a true shared-utility technology. A lot of times, organizations internally feel like they can have one or the other, and they're almost forced into the shared-utility model. And to be honest with you, at least with the users and customers we've dealt with, that usually doesn't go well. A lot of groups resist it. They don't want to be forced into using a shared utility. They're, for lack of a better word, the old server huggers: they want their own servers to manage. Leaf nodes are our answer to that. They allow you to have your cake and eat it too. What it means is that you can use leaf nodes to extend super clusters with hub-and-spoke topologies. You're allowed to bridge separate security domains. So if you're doing an IoT deployment, that can have its own security domain, however you want to do it, and it can bridge transparently and securely into a super cluster that is maybe the backend services and microservices you're actually trying to deploy. It's ideal for edge computing, IoT hubs, and data centers. We've seen lots of POCs that love the fact that there is a shared utility, with services built into that super cluster that can be imported, say a key-value service or an encryption service or a digital-signature service, but that can also run their own servers with extremely low latency.
I mean, under a single microsecond sometimes, depending on your network, for the local communication between the things we care about most deeply. But because of the way the system is built, it will transparently, securely, and dynamically understand whether a message needs to be routed up to the super cluster, or whether messages originating in the super cluster need to be routed out to our own client applications. So this leaf node technology is very powerful and essentially lets you have your cake and eat it too. You can have your own technology, your own silo for your own org or your own microservice, while your company or organization goes down the path of a true shared utility within the company. So this is kind of what that would look like. It can be a single server or a small cluster of servers. And again, all of this works totally transparently. I actually use it quite a lot: I have a server running on my computer that has no security except for the fact that it's only listening on 127.0.0.1. I have to have physical access to my machine to be able to use it, but the tools and utilities I use are instantaneous, and if need be, they will immediately go into a super cluster that we run. So we talked a little bit about the fact that these services, these queue subscribers, are now transparently geo-aware. And so if you have a setup that's a super cluster like this, and you have the primary and the backup, and all of a sudden there's an outage somewhere in the flow, the system will automatically reroute things for you on the fly: no configuration changes, no client application changes, nothing needs to happen whatsoever. And what we've seen is that this is extremely powerful.
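The geo-aware selection with RTT-based failover described above can be sketched as a small Python function. The cluster names, RTT values, and responder names are all invented for illustration; the real system measures RTTs and tracks responders itself, while this sketch just shows the decision rule.

```python
def pick_responder(cluster_rtts, responders):
    """Pick a responder in the lowest-RTT cluster that still has one up;
    otherwise fail over to the next-closest cluster (illustration only)."""
    for name, _rtt in sorted(cluster_rtts.items(), key=lambda kv: kv[1]):
        live = responders.get(name, [])
        if live:
            return name, live[0]
    raise RuntimeError("no responders available anywhere")

# Hypothetical RTTs from the requester's point of view, in milliseconds.
cluster_rtts = {"us-west": 5, "us-east": 70, "eu": 140}
responders = {"us-west": ["svc-w1"], "us-east": ["svc-e1"]}

assert pick_responder(cluster_rtts, responders) == ("us-west", "svc-w1")
responders["us-west"] = []          # outage in the closest geo
assert pick_responder(cluster_rtts, responders) == ("us-east", "svc-e1")
```

Note that nothing on the requesting side changed between the two calls; the reroute falls out of the selection rule, which is the "no configuration changes" property the talk emphasizes.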
And it's not only outages where this works. If you're observing the system and you realize requests are coming from Asia while your services are on the West Coast of the US, you can move responders, or start adding responders, in Asia, and the system automatically responds. Again, no configuration changes: you just deploy those service responders in Asia, and the system will automatically route requests from Asia directly to them. No load balancers, no configuring things on cloud providers, none of that is needed. So, security. It's really important; it has to be secure by default. But a lot of times people's shoulders drop, they shrug, they go, oh man, this is just going to be painful. And we realize that. So we really wanted secure-by-default to be easy and decentralized, meaning there is no central point of control. We have full TLS support with certificates, bi-directional, both client and server. We can use the DN or SAN inside a certificate as the NATS user identity, and we support normal username and password. And again, leaf nodes allow you to bridge different security domains as needed. And of course you have permissions: restrictions on what subjects you can send on and receive on, how big message payloads can be, all of this is possible. And with NATS 2.0 and operator mode, all of this is decentralized. There is no centralized control for how I would introduce a client, say for Colin, that can only send up to 256 bytes and only on this subject. So this notion of least privilege is very easy within NATS, whereas normally it's very, very hard and very scary to do. Generally, the way operator mode works is similar to the way the web works. There are three main components: there's an operator, the operator vouches for accounts, and accounts vouch for users. And applications use user credentials, essentially, to connect.
The system does not understand anything on boot-up, so to speak, except the trusted list of operators. It can figure out everything else from there. And this is important: a NATS 2.0 system never holds any private keys or passwords whatsoever. They're never even transmitted to the system. What happens underneath the covers is that the system challenges a client application with a nonce that needs to be signed by the private key of the owner of the client credentials. Then those credentials have to be valid, not expired, and signed by a valid account. And again, the system can dynamically figure this all out on the fly. Therefore, account owners are free to do anything they want with users, adding them, revoking them, changing their permissions, without ever having to talk to an operator or any type of central authority whatsoever. And NATS 2.0 is designed specifically to be a federated model, meaning it can have multiple operators. It could have servers being operated by Google and by Amazon and by Azure, yet as a user, they all look the same. I can present my credentials to any server in the network and it will work. A little bit of detail, and we'll share these slides after the webinar for sure: we use PKI, specifically encoded Ed25519 keys, which at the time we were making these decisions were the most resistant to branch-prediction attacks, the Spectre and Meltdown type of stuff. We could support other schemes, but right now we only support Ed25519, which allows us to make the JWTs public and secure. And we do the signatures with Ed25519 keys only. Again, the three actors in the system are operators, accounts, and users. I encourage you to go to our new documentation if you want to learn more, or ask questions, or ping me directly on the Slack channel or at info@nats.io. These slides show what these things look like, and here on the account, these are some of the things that you can control.
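The nonce-challenge flow described above can be sketched in Python. One big caveat: Python's standard library has no Ed25519, so this sketch substitutes an HMAC as a stand-in for the signature, which means the verifier here needs the shared key, unlike real Ed25519 verification, which uses only the public key. The flow, not the cipher, is the point; every name below is invented for illustration.

```python
import hashlib
import hmac
import os
import time

def server_challenge():
    """Server sends a fresh random nonce to the connecting client."""
    return os.urandom(16)

def client_sign(nonce, private_key):
    """Client signs the nonce with its key (HMAC stands in for Ed25519 here)."""
    return hmac.new(private_key, nonce, hashlib.sha256).digest()

def server_verify(nonce, signature, key, expiry):
    """Server checks the signature and that the credentials are not expired.
    With real Ed25519 this would use only the user's PUBLIC key."""
    if time.time() > expiry:
        return False
    expected = hmac.new(key, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(signature, expected)

key = b"user-secret"                       # hypothetical user key
nonce = server_challenge()
sig = client_sign(nonce, key)
assert server_verify(nonce, sig, key, expiry=time.time() + 60)
```

The property worth noticing is that the nonce is fresh per connection, so a captured signature cannot be replayed, and in the real system the server side never sees a private key at all.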
The max number of connections globally, the max number of leaf node connections, the max data, bandwidth essentially, the exports and imports, things like that are all under your control, totally decentralized and instantaneous. So for example, if someone is trying to connect in Asia and the account already has 10 active connections globally and this would be the 11th, it's denied instantaneously. The system, again through the way the servers talk to each other in a secure manner, keeps all of this state at a global scale. Any changes to an account are also instantaneous. So if Colin revoked me as a user, my application would instantaneously be disconnected, essentially at the speed of light, within about 10 milliseconds globally. Documentation is at nats.io, and documentation is hugely important for modern systems: people can self-learn, and can ask and answer their own questions. We encourage you to look at the docs and to help us improve them. We put a lot of effort into them, but it's an ongoing effort that we work on every week. The ecosystem has also embraced this and started to really put effort into helping us maintain a world-class set of documentation, which, again, I think is critical for any type of ecosystem. Our ecosystem is great and growing, as you can see. We're probably at about 85 million combined downloads between NATS and NATS Streaming on Docker. These are kind of like benchmarks, you've got to take them for what they are, but we can see that we're going up and to the right very quickly. And we see also, with the number of Slack users and the number of interactions on Slack, that the ecosystem is very healthy and growing very quickly. We're super excited about all of that. We encourage everyone to join in and ask questions. We're really excited about where the ecosystem is going. And now a little bit from a roadmap perspective.
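Before moving to the roadmap, the account connection limit described above can be sketched as a tiny admission check. This is a toy model with invented names; the real server tracks the count globally across the cluster through the system account rather than in one object.

```python
class AccountLimits:
    """Toy enforcement of an account's max-connections limit."""
    def __init__(self, max_connections):
        self.max_connections = max_connections
        self.active = 0

    def connect(self):
        # The check is local and immediate: no round trip to a central authority.
        if self.active >= self.max_connections:
            raise ConnectionRefusedError("account connection limit reached")
        self.active += 1

    def disconnect(self):
        self.active -= 1

acct = AccountLimits(max_connections=10)
for _ in range(10):
    acct.connect()              # the first ten connections succeed
try:
    acct.connect()              # the eleventh is denied immediately
except ConnectionRefusedError as e:
    print(e)
```

The same shape applies to the other limits mentioned (leaf node connections, data, exports and imports): each is a per-account counter checked at admission time, owned by the account rather than by a central operator.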
So we're not sitting still. We've put a tremendous amount of effort into NATS 2.0 to get it out there, to really push what we feel is the boundary of how modern distributed systems can be designed and architected. But we're not resting on our laurels; we're pushing continuously. So where we are now with NATS 2.0, again, that's the decentralized security, multi-tenancy accounts, global superclusters, and leaf nodes. We already have transparent bidirectional bridges for Kafka and MQSeries, which quite a few users in the ecosystem and customers of ours are starting to use. Again, the documentation updates are critical, I think, for a healthy ecosystem. In 2019, we're now working on the successor to NATS Streaming; we call it JetStream. We've been engaging with the ecosystem with design documents and asking for input and feedback. We're also doing things around observability, default observability built in, where you can export a service and say, hey, I just want to observe this service and sample it at 50%. And every time you sample it, send the results to this subject within my account. We feel it's a very, very powerful approach, a very simple approach, but it also works at a global scale, meaning the request can be coming from anywhere in the world and the responder can be anywhere in the world. And the system will present you with a metric that says: here's how long the service itself took, here's the responder RTT, the requester RTT, and the internal NATS latency, even across something like a supercluster. So we're super excited about that. As we move toward the end of the year: we do believe that IoT is a massive opportunity for any type of technology that tries to connect everything, and we believe NATS works amazingly well on very low-end devices. So we're going to introduce native MQTT support, for 3.1 and 5.0 and the sensor network (MQTT-SN) versions, such that these applications, untouched, can connect directly to a NATS system.
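The service-latency export mentioned above is also declared in the account JWT. An illustrative sketch of exporting a service with 50% sampling, with results delivered to a subject within the account (the subject names here are made up for the example; the `service_latency` block follows the NATS JWT library, but treat the exact shape as an approximation):

```json
{
  "exports": [
    {
      "name": "my-service",
      "subject": "req.my-service",
      "type": "service",
      "service_latency": {
        "sampling": 50,
        "results": "metrics.my-service.latency"
      }
    }
  ]
}
```

Any subscriber on `metrics.my-service.latency` within the account, and only within the account, then receives the sampled latency measurements.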
We're also going to introduce WebSocket support and true end-to-end, zero-trust security. Zero trust, for us, is a loaded term, but what it means is that before our system ever sees anything, everything's already encrypted and we don't have the keys. That's kind of our version of what that means. And then as we go forward, we really think there's a massive opportunity, again, if everything's connected and these services are available and things can be imported, KV, databases, things like that, to also embrace where WebAssembly is going, and use our security model with WebAssembly to say: here's a signed WebAssembly module that I want to run on ingress for the subject foo in my account. These modules can do things like identify personal information, GDPR-type things, filtering, teeing, anything you can think of. And again, with WebAssembly you can write it in almost any language you want, which, combined with our security model, we think is going to be extremely powerful for really extending the behavior of these systems: not only with applications and services and streams that can be imported, but also with these WebAssembly modules that actually embed themselves into the network at a global scale. So, hopefully this was helpful. We really think there's an opportunity to think differently, to stop making things more complicated. And again, regardless of what we do as an ecosystem, we know this is going to happen. Technology is always a pendulum; it's always swinging back and forth. I still feel we're swinging towards making things more complicated than they need to be. I want to start trying to make things simpler, but not give up on the principles of what we feel modern distributed systems should deliver. Microservices, easy deployment, observability, debuggability: those things are good, they're meaningful. I just think we can do them in a simpler way. So we need what I consider our own Tesla moment, right?
Where Tesla said: all of that amazing, century-old technology that goes into building an internal combustion engine, we can just replace with something really simple and bulletproof, and it just does everything you want. You've got a steering wheel, you've got a gas pedal, so to speak, but the technology that powers it is radically simpler, radically easier to maintain and operate, and cheaper. Thank you. I really appreciate the time. Here's how to get hold of us: nats.io is the website, github.com/nats-io, where I think we're up to like 65 different repos, the ecosystem again is very, very vibrant, and nats.io on Twitter. Now, happy to take any questions. We have some time? Perfect. We do, we have a couple of questions here in the Q&A. Great. So the first one: a problem we've had with NATS, and frankly some other messaging systems, is a lack of message-ordering guarantees, or more simply, message timestamping for messages sent by different producers to a given receiver. What are your suggestions for handling this? I think that's a great question. There are cases where ordering does matter, right? And if you look at NATS Streaming, one of the first tenets of what NATS Streaming does, and what JetStream, its successor, does, is that once an order is defined, it's the exact same order for every subscriber in the system. And so my answer is that if ordering is important, you know, use NATS Streaming or look into our work on JetStream and be part of that process to help us. But one of the first tenets is that there is a global ordering for what we now call a message set in JetStream. Okay, great, thank you. And a reminder to the attendees: if you have any questions, put them in the Q&A and we will go ahead and answer them. And by the way, Kim, one last thing. You know, NATS Streaming and JetStream, you don't need to use them for everything.
So for example, remember how we talked about system accounts, and that our servers actually talk to each other now and coordinate information. When they send a message, they send it with their ID, which is a unique Ed25519 key like we talked about, and a sequence number per event. And so any other remote server can instantaneously, no matter how many servers are in between them, realize if they missed something. So, you know, you don't have to go to a pre-canned solution all the time. If there's a situation where you say, hey, I want to make sure I didn't miss anything, it can be as simple as saying: every time I send a message, I send a unique identifier and a sequence number that's always plus one. And then the other side simply has to store state on that unique identifier and the last sequence it received. And if it receives something that's not plus one, it throws its hands up or does something to fix the problem. But, you know, we didn't use, let's say, NATS Streaming internally to communicate between the servers. We used that fairly straightforward trick, and anyone can use it. Okay, great. Another question, from Antonio: does NATS provide some schema-registry-like component to allow producers and consumers to sync on message schemas? That's another great question, and I have a two-part answer. Philosophically, NATS was designed to be payload agnostic, and we have maintained that throughout its lifetime. Now, from an application standpoint, you need to know what the message is, right? And so understanding what it's meant for and what encoding it's in, JSON or protobuf, is important. And we want, as part of the ecosystem, to make that easier. We have some clients that do a little bit of that stuff, but one of the things on the roadmap is data pipelining, and we've been doing some internal work to make that easier.
However, on payload agnosticism, sometimes people push back a little bit on that, and that's totally fair. The metaphor I'd like to draw, though, is that it's kind of like a phone. On my phone I can speak English, Portuguese, Japanese, it doesn't matter, right? It's payload agnostic. You would hate to buy a phone that only understood English. And so we feel philosophically the core system needs to be payload agnostic. But we also recognize, as an ecosystem, that we need to make it easier for our users to be able to say: I want the things gRPC has, like protocol buffers and service endpoint definitions and things like that. And so the team is putting a lot of effort in there, and if you have suggestions, we encourage you to send that feedback to us or submit a PR. We are deeply interested in making that part of the ecosystem, but core NATS will remain payload agnostic, for the same reason your phone doesn't just speak English. Okay, another question. So, is NATS a way to approach the classical service-oriented and API gateway architecture? Would you advocate using NATS as the communications layer for a user subscription site for analytical data? We do, and we're biased, but we do. Where it becomes very interesting is that if you understand what the request is and what the response is, you can layer that over the top of NATS very quickly. A lot of times it can be as easy as a 10-minute exercise, and we've actually done it in less with some of the team members at user sites, right? To say, oh, we'll just switch it over to NATS. But that's not the most powerful part. The most powerful part is that now you have a totally end-to-end secure system that you can lock down any way you want without ever changing a line of code. You can scale up the service endpoints without adding load balancers, proxies, or sidecars. You can scale it back down. You can put the requesters anywhere in the world.
And so what you get in that 10-minute exercise is a truly global, scalable system that is also observable. Now, you can observe it today, it's a pure pub/sub system, but we're making that even easier, and that code has already landed. So I encourage you to reach out to me; we'll talk one-on-one about where we're going with that, to say: oh, and by the way, this exported service is now instantly observable anywhere in the world, but only for secure credential holders, right? I can see all of the latency. And again, not just the service side or the requester's view of the world, because when you're observing, it's actually: what does the service provider see? What does the requester see? And what does the actual NATS system see? Anyone who's observing wants to see all three. And, you know, we haven't seen that elsewhere yet, and our system does that. So absolutely, and please reach out to me directly or to the team. We'd love to talk more one-on-one with you. Okay, here's a good one: where or why do you think we should not use NATS? Well, I could be flippant and say: well, if you want to spend more money, more time, you know, all that other stuff. But there is a legitimate answer, right? And we've actually seen this quite a bit, with people coming to us going, hey, we really want to move faster, and we really want to figure out a way to manage our costs on one of these systems once it's set up. And in that case, what we say, 100% of the time, is: don't replace what you have. Don't change it. Because if you all of a sudden go to rip and replace everything that you've built, whether it's HTTP and REST or gRPC or Kafka or WERDA or MQSeries or another legacy messaging system, ones that a lot of our team probably built, at least from the TIBCO days, we want to protect that investment. So in that case, we would say: don't use NATS for that. Use NATS to extend, and for new greenfield-type implementations. Okay.
Is there any work on a horizontally scalable, unordered task queue solution such as SQS? Yes, that's part of the JetStream initiative. And again, there is actually a JetStream channel on the Slack that is sharing all of the design documents, and you can see a lot of the conversation among the people in the NATS ecosystem who care about this. Part of that subsystem is to include that functionality. For those who don't know what JetStream is, it's the successor to NATS Streaming, but the take-home points are that it's not only streaming: it can do work queues, it can do normal messaging-type stuff. It'll be built into the NATS server, it'll be horizontally scalable, and it'll allow different types of what we call observable patterns, again, exactly the SQS-style functionality you just asked for. Okay. Great, the Q&A queue is empty. Do we have any other questions for Derek? We'll give it a couple of seconds here. Okay, we have no more coming in. So, oh, wait, we do. Matt says: we are currently using STAN. Will there be significant changes needed to move to JetStream? So, that's a great question. We don't necessarily plan on big changes, and we want to make the move as easy as possible. So we're going to put effort into that, but after JetStream is actually designed. What I mean by that is we're not trying to design JetStream as STAN v2, because we felt that getting closer to the native NATS API, understanding wildcards and different types of patterns like that, was critically important, as was bundling it inside of the server rather than having STAN run as its own embedded server. But at the end of the day, we try very hard to protect the ecosystem and protect our users. Nobody's perfect, and we're not either, but as you can tell, if you take an eight-year-old NATS application, you can still run it today. We at least try. And we're going to do that with STAN too, and for people who want to move from STAN to JetStream, make that as easy as possible.
Okay, any other questions? I guess we have to give people time to type. Well, while they're typing: thank you, everyone who was on. I really do appreciate the time. It's a fascinating time to be alive from a technology standpoint, so we're just excited to be part of it and see how all of this unfolds; it's amazing. Yep, yes it is. And somebody asked if they would get the presentation, and yes, the slides will all be uploaded to the CNCF website sometime this afternoon. So, thank you, Derek. With that, I'm gonna pause the recording. Thank you so much for this great presentation. I'm gonna pause this and if you go to my...