As people join, they can catch up with the recording. So I'd like to thank everybody who's here today. This is a CNCF webinar, and we're going to be talking about the need for a Kubernetes native message queue broker, KubeMQ. I'm Alex Ellis, founder of OpenFaaS, the cloud native serverless project, and as a CNCF ambassador I'm going to be moderating this call and handling any questions that you might have as we hear the presentation. So we'd all like to welcome our presenter today, Lior Nabat, CTO at KubeMQ. So welcome, Lior.

Thank you very much. I hope all of you can hear me well. Good morning, good evening.

We do have a few housekeeping items, and then we'll hand it back over. Basically, this is not your regular Zoom call, this is a webinar. As an attendee, you don't get to talk or be on video, but you can drop your messages into the Q&A box, which you should see at the bottom right. We also have chat. So as Lior presents, I'll occasionally interrupt and ask some of the questions as we go through. We might not be able to answer all the questions, but we'll do our best, and anything else we'll follow up with at the end. Now, this is an official CNCF webinar, which means we're subject to the Code of Conduct. So please don't say anything that you wouldn't say in public and that might be in violation of the Code of Conduct, and let's be respectful of the participants and the presenters. And with that, I'm going to hand it back over to Lior, and we'll kick off the presentation, the need for a Kubernetes native message queue broker. So thank you.

Thank you again, Alex. Good morning, good evening, depending on where you all are. Thank you very much for this opportunity. In the next hour I will try to present to you a solution for a Kubernetes native message queue broker. I'm not going to bombard you with many slides and theoretical information. I'm going to go over the need, some use cases and some architecture examples, and we'd also like to share two demos that show, in real life, typical usage of a message queue inside Kubernetes. So I will start, and again, if there are questions, Alex can help me here and we'll take them during the presentation.

Okay, so let's start. At a broad, basic level, when we move into Kubernetes and start building microservices, most of the solutions you have for service-to-service, point-to-point connectivity mean that you use a REST interface or gRPC, or even some kind of service mesh, where you deploy a data plane and hard-wire the connections between services. That adds complexity to your architecture. It means that at some point, for example, if you need to broadcast a message, to stream, or to use some kind of asynchronous messaging between two services, you have to handle it in the services' business logic. And then, when you start decoupling your microservices and start using Kubernetes' best tools for deployments, replicas and load balancing, it starts to create challenges: how do you really connect between services, what do you do about service discovery, how do you reach other services, and what do you do about versioning if you change, for example, the API definition between services.
So for this challenge, and it's not new, a message broker or queue is one way to deal with it: all the services know the message queue broker's address and communicate between them through it, which allows a lot of flexibility and endless possibilities from an architecture point of view. When you put a message broker inside your cluster, you start to gain advantages compared to other setups. I'll compare what happens when you put the message broker outside of your cluster versus inside it. When it's inside, first, you gain the benefits of Kubernetes: if, for example, you're using tracing and metrics and the broker is embedded in your cluster, you can enjoy end-to-end tracing between services. Second, from a security point of view, everything stays inside the cluster. Third, there's the ability to replicate and build clusters and mini-clusters and put them on the edge, for example, meaning that if your architecture uses message queue broker capabilities, your architecture gains the Kubernetes benefits as well. From an IT perspective, if you need to deploy the whole back end or some piece of architecture, a Kubernetes native message queue, for example one that has an operator, can be built into your pipeline, into your CI/CD. You can deploy it very quickly, you can scale up and down, you can use whatever kind of controls you want, and it unifies the operational workflow of your deployment.

Now, some of the solutions today, the traditional ones like Kafka or RabbitMQ or the others we'd call legacy, were not built for Kubernetes. What we saw is that many companies and solution architects put them outside of the cluster. Once you put such an important component outside of your cluster, you start to lose some of the benefits of Kubernetes and you expose yourself to challenges. Security, for example: rotating your TLS certificates is almost impossible with an entity that sits outside the cluster, and you open and expose your security domain outside the cluster. You also double the traffic, you need an additional environment to maintain, and it's additional overhead. Of course, you can deploy Kafka or RabbitMQ or something like that inside your cluster, but think about putting a cluster on the edge. On the edge you have limited resources and other challenges, and a big solution like Kafka, where you need five or six nodes just to support a very simple setup and which isn't tightly integrated with Kubernetes, is a challenge to run there. So the idea is more of a very small solution that you can deploy anywhere, on the edge or in your back end, and then connect them together and build many, many solutions. Another cool thing is that once you have a message broker inside your cluster, you can start to use it as a gateway to many other services, so you don't need to write interfaces to them.
For example, if you have a cache like Redis, I'll show it in a minute, or a log store like Elasticsearch, or databases, and so on. If you have applications that need these kinds of services, instead of connecting to them and managing them directly, you can use the message broker inside Kubernetes to route the messages and connect everything with connectors or whatever kind of interface sits between them. I'll give another very good example, and I'm going to show it in a minute. Say you have an API that needs to connect to a database. The typical solution is a container or deployment of this API that holds the database connection inside and connects directly to the database, and that's potentially a problem. Instead, you can decouple them with the message broker: the API connects to the message broker, and another service also connects to the message broker and handles all the connectivity and all the requests for that database. Then you can scale, for example if you have high traffic, you can do versioning, you can rotate security credentials. At the end it's a more robust architecture.

Now let's talk about five typical use cases, the most common usages of a message broker inside Kubernetes. The first one is a multi-stage pipeline. A multi-stage data processing pipeline is a very common scenario where you have pipelined work: an object of data moves along and needs to be processed by different processors. For example, you queue some predefined work, the first-stage processor takes it from the queue, does some processing and moves it on to the next stage of the queue. In a security system, for example, you have multiple cleaners that need to clean the data as a pipeline. So a multi-stage data processing pipeline with processors is a typical usage of a message queue, and here it really is a queue, because mainly we're going to talk about messaging patterns, and the patterns we're going to cover are queues, RPCs, streaming, real-time, non-real-time and all the applications around them. One important thing to note is the ability to have a dead-letter queue, if you're familiar with Amazon SQS or something like that: if any stage fails to process a message, the message fails over to a different queue, where another special process can pick it up and process it, or throw it in the garbage, so it doesn't block the queue and stop all the processing. So this is a very typical scenario. Another very familiar one, actually the most common usage of queuing, is a distributed job and task queue: you have many, many producers sending tasks and jobs to a message queue, and on the other side workers take that work from the queue and process it. It's different from the previous use case, which was a serial path of moving an object between processors; here there's no meaning to the timing or synchronization between them.
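As a rough illustration of the pipeline and dead-letter idea described above, here is a minimal Go sketch. It doesn't use the actual KubeMQ SDK; plain Go channels stand in for the broker queues, and the stage names and processing logic are made up for the example.

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// Message stands in for a queued message body.
type Message struct {
	ID   int
	Body string
}

// runStage pulls from `in`, processes each message, and forwards the result to `out`.
// Messages that fail processing are routed to `deadLetter` instead of blocking the pipeline.
func runStage(name string, in <-chan Message, out, deadLetter chan<- Message, process func(Message) (Message, error)) {
	for msg := range in {
		result, err := process(msg)
		if err != nil {
			fmt.Printf("[%s] message %d failed (%v), sending to dead-letter queue\n", name, msg.ID, err)
			deadLetter <- msg
			continue
		}
		out <- result
	}
	close(out)
}

func main() {
	// In-memory stand-ins for broker queues: stage1 -> stage2 -> done, plus a dead-letter queue.
	stage1 := make(chan Message, 10)
	stage2 := make(chan Message, 10)
	done := make(chan Message, 10)
	deadLetter := make(chan Message, 10)

	// Stage 1: "clean" the data; reject empty payloads.
	go runStage("clean", stage1, stage2, deadLetter, func(m Message) (Message, error) {
		if strings.TrimSpace(m.Body) == "" {
			return m, errors.New("empty payload")
		}
		m.Body = strings.TrimSpace(m.Body)
		return m, nil
	})

	// Stage 2: normalize the data.
	go runStage("normalize", stage2, done, deadLetter, func(m Message) (Message, error) {
		m.Body = strings.ToLower(m.Body)
		return m, nil
	})

	// A producer enqueues some predefined work, including one bad message.
	for i, body := range []string{" Hello ", "   ", "WORLD"} {
		stage1 <- Message{ID: i, Body: body}
	}
	close(stage1)

	for m := range done {
		fmt.Printf("processed message %d: %q\n", m.ID, m.Body)
	}
	fmt.Printf("dead-letter queue holds %d message(s)\n", len(deadLetter))
}
```

The failing message ends up in the dead-letter queue, so the pipeline keeps moving and a separate process can inspect it later, which is the behavior described above.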
So actually, in this job and task use case, there's a queue distributing the work between producers and workers, very similar to many, many other solutions, but you have it inside your cluster.

So Lior, we have a couple of questions. Would you say here that you could, for instance, limit the level of parallelism or concurrency between those workers for each queue?

Yes, of course. For a particular queue, the workers can be any number of subscribers working in parallel, or just one, and you can also do some kind of multicasting of messages between them. So you have very good control over how you throttle the messaging between them, and you can also control what happens if you need deletion, expiration, delayed messages or transactions. For example, if you get a message and you cannot handle it, you can return it to the queue and not continue, things like that. So yes, you have very good control over this.

The other question here is: there are open source projects that are well integrated into Kubernetes, like RabbitMQ and Kafka. So we're being asked, what's the real difference here, and why would you use KubeMQ instead of those solutions?

Okay, a couple of things. First, Kafka is more the traditional choice for streaming types of messaging applications, and RabbitMQ is the more extensive one, but neither of them was designed from the beginning to run in Kubernetes. First, they need a lot of resources. Second, KubeMQ has some unique features that the others don't have, like a gRPC interface, multicasting, queuing, authorization, authentication, pre-built routing. At-least-once, exactly-once, at-most-once, all these messaging semantics, metrics support, it's all built in, and you hardly need to write a line of code; you don't need to build another layer of business logic on top of it. And it's very small, the container is about 30 MB and can be deployed anywhere. It's written in Go and has its own capabilities. So you can use Kafka and RabbitMQ, but you also need a lot of DevOps knowledge. KubeMQ can run with an operator; you don't even need to configure it. Just one simple example: there's no configuration in KubeMQ, meaning you don't need to define queues or exchanges. Zero configuration means you bring it up and it's running; when you send a message it opens a channel, it opens a topic when you're receiving, it does everything for you, and it's very flexible.

Those are very informative answers. We've got a couple more questions, but I want to give you a chance to do your presentation, so I'll find another opportunity.

Okay, no problem. So this is KubeMQ. Another use case is stream message processing, very similar, for example in IoT-type applications where you need to process a lot of messages and route them to different services: pipelines, data stores, machine learning. Another quite interesting one is the ability to do pub/sub in real-time messaging, where, for example, you need to distribute a lot of data fast. Also fan-in and fan-out, where again you can distribute a lot of data. And the most common one, I think, is this one, what we call application decoupling and microservices.
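To make the earlier point about throttling workers concrete, here is a small Go sketch of a distributed-task-queue consumer with a capped number of parallel workers. Again, the buffered channel is only a stand-in for the broker's queue, and the worker count is an assumed parameter for the illustration, not anything KubeMQ-specific.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// Task stands in for a job pulled off the queue.
type Task struct {
	ID int
}

func main() {
	const workerCount = 3 // throttle: at most 3 tasks are processed in parallel

	// In-memory stand-in for the broker's task queue.
	tasks := make(chan Task, 20)

	var wg sync.WaitGroup
	for w := 1; w <= workerCount; w++ {
		wg.Add(1)
		go func(worker int) {
			defer wg.Done()
			// Each worker keeps pulling tasks until the queue is drained and closed.
			for t := range tasks {
				fmt.Printf("worker %d processing task %d\n", worker, t.ID)
				time.Sleep(100 * time.Millisecond) // simulated work
			}
		}(w)
	}

	// Many producers could feed this queue; here one producer enqueues ten tasks.
	for i := 1; i <= 10; i++ {
		tasks <- Task{ID: i}
	}
	close(tasks)

	wg.Wait()
	fmt.Println("all tasks processed")
}
```

Changing `workerCount` is the knob for parallelism: one worker gives strictly serial processing, more workers spread the same queue across subscribers.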
Application decoupling means that you can now have the message queue as a message broker that handles all the small pieces, all the services and all the connectivity between them. And this brings me to actually show you a real-life example of such an application. So maybe I'll stop here, you can ask some questions, and then we can go.

Yeah, sure. I mean, if you've got a demo to get ready, you could start with that. This is kind of going through this slide: what would happen if you had a Kubernetes cluster on premises and a Kubernetes cluster on Amazon, and could KubeMQ somehow federate between the two of them? That's one of the questions. And how?

Yes, they can connect between them. You can define what we call a gateway, and then you can connect between them directly. We have a similar demo, and we're going to show how we'd migrate an old system to a new system with some kind of bridging. We'll show it later on.

Now, the other one, just maybe as you click through and get your demo ready: all of these sorts of solutions need storage, because if a pod crashes, we lose the data. So how are you handling that? Are you using persistent storage? Do you need a volume for it?

Yeah, actually it's a great question. First of all, yes, you decide how much volume you want and it will create a PVC, a persistent volume claim, for you. It's a StatefulSet, which means it maintains consistency between all the pods. What we saw over more than two years of operating KubeMQ is that more and more users are not using persistency, because they're moving to other messaging patterns that don't need it. You really need persistency only when you're not willing to lose a single message. And when I say not willing to lose one message, I mean the case where your whole cluster is down, the whole message queue is down, all the nodes are down. If you have a cluster size of three or five, even without a persistent volume defined, you actually still have all the messages, you're not losing them. Only if your cluster gets wiped out without persistence, and something really bad happens, will you lose messages if you're not using a persistent volume. But if it's important, yes, when you define your cluster you can set whatever volume size you need; it will take the claim and create it for you.

Okay, is there anywhere we could find out a bit more about that? Maybe at the end. I don't know if Oz has a link he could maybe send in the chat later, just to help people look through that. Sounds interesting. Okay, thanks for that.

Okay, so let's look at a typical application, what we call a user domain. The user domain is a solution, what we'd call a microservice architecture, for managing users. It's an API and a web interface where I'd like to create a user, get a verification code, do some verification, log in and log out; a very small, typical application of how you manage users. So I'll show you an architecture that we quickly built to show the capabilities I'm talking about. What we have here are services: we have a web API and we have a web server. I'm going to show you in a minute how it works.
The web server will serve the front end, and it will work with the web API, which on one hand gets the requests and communicates all the messages over KubeMQ. And we have some other services connected to other back ends: we have a Redis cache, and I'll show you in a minute how it works; we have PostgreSQL, which will be our users database; and we have Elasticsearch, which will do the auditing and log all the messages happening in this architecture. Very simple. I'm also going to show you how it looks inside the message broker and the capabilities you can see with it.

So the first thing I'm going to do is register a user. Registration involves going to the database and doing what we call a query, like an RPC: it sends a query to the database and asks if this user exists. If not, it creates it and sends it back for verification. Then it sends you a token and you need to do the verification, meaning it again sends a command to the database to do the verification. Then we're going to do a login. Here it gets much more interesting. First, it checks the cache to see if this user is already logged in. If yes, it does all the work from the cache. If not, it works from the database: it goes to the database, gets confirmation of the login, and then updates the cache with the connectivity information, and off we go. Then, if you do a logout, it marks that the user is logged out and also clears the cache, which means the next time you log in, it will need to do the full login again. And again, all the messages will be logged automatically to Elasticsearch, okay?

So give me one second, I'm going to share the screen. I hope you see this one, okay? You see this? Okay. Here I'm going to enter a new user, give a password, some email, email.com, and then I will register. What we'll see here is a message that it successfully got the verification token. I'll do the verification, verify, then I can do the login. Now it goes again to the database, and I'll show you in a minute how it happens in real time. And then we can do a logout. If, for example, before doing a logout I do the login again very quickly, it will be much, much faster, because it's now going to the cache, and I can log out and log in again. More or less this is the flow. We can also see some errors that can happen: if I log out when I'm already logged out, I get told I'm already logged out; if I do a bad verification, I get a bad-verification error; if I register something that already exists, it tells me it already exists. So this is a simple, typical application.
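The login path just described, check the cache first, fall back to the database, then refresh the cache, is the classic cache-aside pattern. Here is a minimal Go sketch of that flow under simple assumptions: in-memory maps stand in for Redis and PostgreSQL, whereas in the demo these lookups travel as queries and commands over the broker rather than direct calls.

```go
package main

import (
	"errors"
	"fmt"
)

// In-memory stand-ins for the Redis cache and the PostgreSQL users table.
var (
	cache = map[string]string{}                           // email -> session token
	users = map[string]string{"dana@email.com": "s3cret"} // email -> password
)

var errBadCredentials = errors.New("bad credentials")

// login checks the cache first; on a miss it validates against the database
// and then populates the cache so the next login is fast.
func login(email, password string) (string, error) {
	if token, ok := cache[email]; ok {
		fmt.Println("cache hit: returning existing session")
		return token, nil
	}
	stored, ok := users[email]
	if !ok || stored != password {
		return "", errBadCredentials
	}
	token := "token-for-" + email // in real life: a generated session token
	cache[email] = token
	fmt.Println("cache miss: validated against database, cache updated")
	return token, nil
}

// logout clears the cached session so the next login goes back to the database.
func logout(email string) {
	delete(cache, email)
}

func main() {
	if _, err := login("dana@email.com", "s3cret"); err != nil {
		fmt.Println("login failed:", err)
	}
	login("dana@email.com", "s3cret") // second login is served from the cache
	logout("dana@email.com")
	login("dana@email.com", "s3cret") // full login again after logout
}
```

This read-through-and-refresh logic is why the second login in the demo comes back noticeably faster than the first, and why logging out forces a full login the next time.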
And now what I'm going to do is switch to something nice that I want to show you, a little bit of what's running inside. I hope you can see this one. Yeah, it looks good. Give me one second. Yeah, let's get that screen back. Yeah. Okay. KubeMQ has a very neat CLI that's easy to use and can create and do a lot of nice things. What's nice about the CLI is that you can attach to a specific pattern and channel and monitor what's happening inside. So what I'm going to do here is the same flow with different information, and I'll show you exactly what's going on in each channel. For the first one, I'm going to start monitoring the Elasticsearch channel, the history one. Now it's connected. Now I'm going to go into the users channel, this will be the database one, and the third one I'll show you is for the cache. What it's doing now is connecting to the cluster and monitoring, attaching to specific information. With different details than the ones I used before, I'll now create a new user. Okay.

One of the other questions, as you're showing your demo, is: is a Kubernetes job currently a valid trigger or input for this? Could we trigger a queue from a Kubernetes job? Is that a current use case?

You can, no problem. KubeMQ has a REST interface, so you can have something like a webhook that sends specifically to a queue or an event or something like that. It has streaming, you can stream up, you can stream down; this also works over WebSocket, a REST interface and WebSocket.

So I mean, if I wanted to do that, I guess I'd have to write a console application, package it in a Docker image and then create the job, and the job would connect to KubeMQ. Yeah, we can. Yeah. Okay, so hopefully that answers the question, Gilliam. Also, this is almost like tracing that you're showing us with your CLI, but is there any integration with OpenCensus or any other projects like that?

Yeah, it's built in. All the spans that you're sending over gRPC and REST actually go through the messaging, which means you have end-to-end spans. It's integrated with OpenCensus.

Great, thank you.

So what we saw here is a lot of messages, but what we can see is that the message going to the Postgres channel gets executed, this is the query, and I get the result through. And what you see here is base64, because we're actually looking at the data from inside the message broker. But let's continue; what I want to see is what's happening with the cache. What happened is that when I did the login, it tried to find, okay, do I have this user? If not, I get a message back. And this is one of the key nice things about it: you can send a query or a command and set some kind of timeout, and when you don't get a response, you get an error back and you can continue and work with it. And if I do have the information, I get it back: I have a request and I get back a response. The same with a logout: I send it and get back information. So this is a typical multi-service application, showing how you can use a Kubernetes message broker, KubeMQ, with its events, queries and commands capabilities. So maybe before I go to the next use case, also with a demo, a very interesting one, if you have more questions, I can take them.

Yeah, so my question as a developer is, where can I see this sample code? Of this one? Yeah. Well, we can upload it to some kind of repository.
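One of the points above is that a query or command carries a timeout, so the caller gets an error back instead of hanging when nobody responds. Here is a small Go sketch of that request/response-with-timeout idea using context; it only illustrates the pattern and is not the KubeMQ client API, and the channels here are stand-ins for the broker's request and response channels.

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// query sends a request on `requests` and waits for a reply or the context deadline,
// whichever comes first, mimicking an RPC-over-queue call with a timeout.
func query(ctx context.Context, requests chan<- string, replies <-chan string, q string) (string, error) {
	select {
	case requests <- q:
	case <-ctx.Done():
		return "", ctx.Err()
	}
	select {
	case r := <-replies:
		return r, nil
	case <-ctx.Done():
		return "", ctx.Err() // e.g. context.DeadlineExceeded: caller can log it and continue
	}
}

func main() {
	requests := make(chan string)
	replies := make(chan string)

	// A responder that answers only the first request, then goes silent.
	go func() {
		q := <-requests
		replies <- "answer to: " + q
		<-requests // second request is received but never answered
	}()

	ctx, cancel := context.WithTimeout(context.Background(), 200*time.Millisecond)
	defer cancel()
	if r, err := query(ctx, requests, replies, "does user exist?"); err == nil {
		fmt.Println("got reply:", r)
	}

	ctx2, cancel2 := context.WithTimeout(context.Background(), 200*time.Millisecond)
	defer cancel2()
	if _, err := query(ctx2, requests, replies, "second question"); err != nil {
		fmt.Println("no reply, continuing:", err)
	}
}
```

The second call times out instead of blocking forever, which is the behavior the demo showed when the cache had no entry for the user.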
Yeah, I mean, if you have a Twitter account for KubeMQ, maybe you could send it out and folks could follow you. I'll find the link for KubeMQ and put it into the chat. Somebody is asking about the maximum throughput: has this been compared to other products, for example RabbitMQ? What are you getting in terms of requests per second?

First of all, this is written in Go and gets all the benefits of Go, meaning it's very small. I'll give an example, and it's very related to the next demo. We have an installation of KubeMQ in financial services that's pushing billions of messages per hour, because you need to push a lot of quotes and data. In our tests, with the proper hardware and memory, and the memory footprint is very small, you can get eight to ten million messages per second. Again, it depends on your hardware specification, but it's no problem to get very high throughput. It also depends on your pattern: if your pattern involves persistency, you're bound by the bottleneck of the network and everything that happens with the persistency. And it depends on how you create your cluster, how many replicas you have in the cluster; it's using the Raft protocol.

It does sound like there are some variables there. We've got around 20 minutes left for you to use how you see fit, and people can continue to ask questions as we go along.

Okay, so the next thing I'm going to talk about, I'm going to switch to a new share, is what we call migration. What we saw with many companies we work with is that they have an on-prem installation of old systems, like MSMQ, for example, in .NET, or even Kafka or something like that, and they would like to start moving their infrastructure to Kubernetes. This is a real case we had: we wanted to move an MSMQ-based system that does financial trading and runs on-prem. When we wanted to move it to Kubernetes, there are no MSMQ capabilities inside Kubernetes, which means we needed some kind of solution. What we're doing here is something called bridging: we put a bridge on the old, on-prem side, and I'm going to show you. On one hand this bridge connects to the legacy system, it can be IBM MQ, MSMQ, any kind of legacy system, and it relays messages, acting like a client on that side, and connects to a KubeMQ in Kubernetes, and then you can start migrating services to Kubernetes.

And what I'm going to show you is this one. This is a demo we did for the Microsoft Azure team, where we showed the capability of migrating a full .NET-based architecture that uses MSMQ to Kubernetes. What we have here is a financial data application: on one side there's a generator of quotes that sends commands to MSMQ, this is the old one, and you also have an API service and a client. Once you want to move to Kubernetes, what we've done and shown them is this: the same architecture, but now there's a bridge connecting to KubeMQ in AKS, and I'll show you. Then you have a database for persistency, and the API service, with the client, migrates from the old system to the new one.
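The bridging approach, read each message from the legacy queue on-prem and relay it to the broker running in Kubernetes, can be sketched roughly like this in Go. The LegacyQueue and Broker interfaces are hypothetical stand-ins (no real MSMQ or KubeMQ connection is made here); the point is just the relay loop in the middle.

```go
package main

import "fmt"

// LegacyQueue is a stand-in for the on-prem side (MSMQ, IBM MQ, ...).
type LegacyQueue interface {
	Receive() (string, bool) // returns the next message, or ok=false when drained
}

// Broker is a stand-in for the KubeMQ side running in the cluster.
type Broker interface {
	Send(channel, body string) error
}

// bridge relays every message from the legacy queue to a channel on the new broker.
func bridge(src LegacyQueue, dst Broker, channel string) {
	for {
		msg, ok := src.Receive()
		if !ok {
			return
		}
		if err := dst.Send(channel, msg); err != nil {
			fmt.Println("relay failed, would retry or dead-letter:", err)
		}
	}
}

// --- toy implementations so the sketch runs ---

type sliceQueue struct{ msgs []string }

func (q *sliceQueue) Receive() (string, bool) {
	if len(q.msgs) == 0 {
		return "", false
	}
	m := q.msgs[0]
	q.msgs = q.msgs[1:]
	return m, true
}

type printBroker struct{}

func (printBroker) Send(channel, body string) error {
	fmt.Printf("relayed to %s: %s\n", channel, body)
	return nil
}

func main() {
	legacy := &sliceQueue{msgs: []string{"quote EUR/USD 1.0842", "quote GBP/USD 1.2719"}}
	bridge(legacy, printBroker{}, "quotes.stream")
}
```

Services can then be migrated one at a time: consumers move over to the broker in the cluster while the bridge keeps feeding them from the legacy side until nothing is left on-prem.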
So I'm going to show, one second. Okay, one second, you see the screen? Yeah, we can see that, it looks very animated. Oh, and now it's coming in. One second, I'll make sure this is the right screen. Yes. Okay. What we see here is a very small example of quotes: you have a foreign-exchange client that's getting quotes at high throughput from this architecture. So this is the front end. Now I can switch to this one, this is the legacy one, you see the Windows Server 2012. What we see here is the generator of data in the purple, no, not purple, the green. Yeah, we see that. And we have a message worker; this is actually the bridge, connecting to a KubeMQ sitting on AKS. And in AKS we have a KubeMQ cluster with the service that actually presents this. Now, the nice thing about it is that it's not only streaming data, I can actually stop it: it's a command I'm sending to the old on-prem system, and I can resume the stream. So it's showing a migration from the old, legacy system to a new one.

So you're simulating this data? Yeah, yeah, we're simulating this data. Here's the simulation of the data, and the blue one is actually the bridge: the green one sends to MSMQ, the bridge takes the data from MSMQ and relays it to the KubeMQ that's sitting on AKS, and this is what you see here, connecting to it directly.

Somebody is asking whether you've got any more information, I don't know if Oz, Lior's co-worker, has a link he could add, around distributed transaction support. So not just tracing, but when you open a transaction across the whole queue, you might have multiple parts.

You have the ability to have transactional queue messages, meaning that when you receive a message, it's very similar to Amazon SQS: you can get a message from a queue and hold it for a specific time, and then you can acknowledge it, reject it, reroute it, or, if it's not processed correctly, you can throw it away or send it to some kind of dead-letter queue and end the chain of the transaction. Now, there's also the ability to do what we call a transactional chain, meaning the first processor takes the first message, you can hold it, send it to another one, and another one, and another one, and then, if someone fails, you can send back a notification and everything gets canceled. So this is also a possible architecture and very easy to implement; we've done it a couple of times and shown other customers how to do it very easily.
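As a rough picture of the transactional receive just described, get a message, then acknowledge it, reject it back to the queue, or push it to a dead-letter queue, here is a small Go sketch. The hold/visibility timeout is left out to keep it short, and the Receipt type is invented for the illustration; it mimics SQS-style handling rather than any specific API.

```go
package main

import "fmt"

// Receipt represents a message that has been received but not yet settled:
// the consumer must Ack, Reject (return to the queue), or DeadLetter it.
type Receipt struct {
	Body       string
	queue      chan string
	deadLetter chan string
}

func (r Receipt) Ack()        { fmt.Println("acked:", r.Body) }
func (r Receipt) Reject()     { r.queue <- r.Body }      // goes back for another attempt
func (r Receipt) DeadLetter() { r.deadLetter <- r.Body } // taken out of the main flow

func receive(queue, deadLetter chan string) (Receipt, bool) {
	select {
	case body := <-queue:
		return Receipt{Body: body, queue: queue, deadLetter: deadLetter}, true
	default:
		return Receipt{}, false
	}
}

func main() {
	queue := make(chan string, 10)
	deadLetter := make(chan string, 10)
	queue <- "send welcome email #7" // will succeed
	queue <- "charge order #42"      // will keep failing

	attempts := map[string]int{}
	for {
		r, ok := receive(queue, deadLetter)
		if !ok {
			break
		}
		attempts[r.Body]++
		switch {
		case r.Body == "send welcome email #7":
			r.Ack() // processed successfully, removed from the queue
		case attempts[r.Body] < 3:
			fmt.Println("processing failed, returning to queue:", r.Body)
			r.Reject() // simulated failure: message becomes visible again
		default:
			fmt.Println("still failing after retries:", r.Body)
			r.DeadLetter() // stop retrying; a special process can inspect it later
		}
	}
	fmt.Printf("dead-letter queue holds %d message(s)\n", len(deadLetter))
}
```

Chaining several such receive-and-settle steps, with a cancellation notice sent back when one step fails, is the "transactional chain" idea mentioned above.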
Great, thank you. Now, do we also have any kind of integration with RBAC or the Kubernetes certificate authority? You mentioned rotating certificates earlier.

Yeah, yes. First of all, we're going to release, I think this week or next week, an operator that will make this much, much easier and be able to facilitate all of this. So, yes.

I think Kev was really looking for a bit more detail on that, but I guess that's what we have for you right now. What's the strategy around HA? So when we come to disaster recovery and things have been completely lost, what can we do?

If you have a persistent volume, it comes up, reads from the persistent volume and rebuilds the logs and whatever it needs to. It's based on the Raft protocol. If you don't have a persistent volume, you lose the data, but it's a StatefulSet, so you gain all the benefits of a StatefulSet. It has no dependencies: you don't need ZooKeeper or any other dependency that you have to install first. It's one container per node, three or five of them, and that's it.

Great. I assume this could be integrated with any Kubernetes service; someone was asking specifically about Amazon EKS. It looks to me like this would just work the same on any distribution.

Yeah, anywhere: in the cloud, on-prem, anything that supports Kubernetes. It can run anywhere.

Did you cover how to install it yet? What's the best way to install this?

Okay, you have a couple of options: Helm, and soon, in the next week or so, an operator. You can also use the CLI, and with the CLI you can very quickly install and manage everything you need: install, update. The nice thing about the CLI is that you can actually work and develop with it; you can send messages and see what's going on between them. If you want, we have all the details, and I can even show how quickly you can integrate and develop with it. One example: when you have a remote cluster and you'd like to connect and work with it, you typically need to do port forwarding, and there's a lot of hassle. With kubemqctl, the command line, the CLI, you can very quickly do something called a cluster proxy, which automatically forwards all the ports to your localhost, so you can develop and work as if everything is on your localhost. It's very easy to do: send messages, simulate all the queuing functions, everything you can do with the CLI.

Thank you for that. Sorry for the feedback from my keyboard there, I was answering questions. So I think we've had a lot of questions, actually; we've answered well over 14 in the Q&A and then a few more. Is there anything else you'd like to show us in the last couple of minutes that you've got? Maybe the final slides, the contact details, and how somebody can reach out to you and find out more. Yeah, one second, I will show it.

Yeah, Lior, I've seen a question about multi-clustering in Kubernetes for KubeMQ, so maybe you can address that.

Yeah, you can connect between clusters. There's another component called a gateway that you can install, and each cluster connects to this gateway and messages pass between the clusters. This is one option. Another option, for hard connectivity, is to build a very small connector that has gRPC connectivity between the clusters. But again, one of the things we really like to do and incorporate is hearing feedback on things we maybe need to add. One example is authorization, which is a very unique feature of KubeMQ.
You can upload an authorization file that maps specific resources, like an access-control layer: you can allow or deny a service access to the message broker per pattern and per channel, together with JWT token authentication. There's also multicasting built in. For example, you can send the same message as an event and also multicast it to a queue, or to different queues, with that one message. You don't need to send several copies to different services; you need only one message. You can say: okay, please send it as an event to this channel, please also put it on a queue on this channel, also send it as an event to that channel, and also replicate it to this other channel, all in one message. This is another feature of KubeMQ that you can use.

Thank you for that explanation. Is there any final word, or anything you want to tell us to sum up?

Just to sum up: KubeMQ is currently a closed-source project. It will be open source soon; we're going to open-source what we call the community side of KubeMQ. It's free and will always be free, and even today it's free: you can use it, you can download it, you can start very quickly, it takes you five seconds to install it. You can use the quick-start link on our website to see very quickly how to use it. So from a licensing point of view, it's free for you to use as much as you want. There's an enterprise version with additional features, and in the enterprise version the source is also open to you: you can see and get the code. More or less, that's it; play with it. It's been deployed and production-ready for almost two years and already runs in many clusters, mainly in financial applications. We'll be very happy to hear from you, and if you need support, we also have a Slack channel. Thank you very much.

Yeah, well, thank you, and thank you, Oz, for joining. I guess we'll wrap things up there. Thank you, everybody, for joining. Please keep an eye out for the next webinar. It'd be great to have you back again. Thank you very much. Bye-bye.