Hello, and thank you for joining me for this webinar on enabling powerful connectivity between edge sources and a Kubernetes backend. My name is Leon Abad, and I'm the co-founder and CEO of KubeMQ. As many of you already know, Kubernetes has become the de facto standard for deploying container-based workloads. It is so popular and fully featured that it has evolved to allow clusters to span multiple clouds and even to run on edge computing devices, including very small ones. But Kubernetes isn't inherently aware of location; it's an abstraction layer on top of whatever computing resources or distribution architecture sits beneath it. As a result, building stable, reliable data transfer from edge to backend has become a significant challenge for most development teams. What I mean is that when you develop a system on your backend, you have plenty of resources such as network bandwidth, CPU, and memory, but that is not the case once you need to connect to edge devices, whose resource characteristics are very different from those in your local network. One of the enablers of such an architecture is a messaging platform that sits on top of Kubernetes and provides communication and highly reliable connectivity between all the components of your architecture. What are we going to learn? First, I'll introduce the KubeMQ messaging framework. We will dive into each component: what it does and how the components interconnect to form highly scalable edge-to-backend connectivity. We will also look at a use case, with a live demo of moving data from an edge device to S3. There we will show how to use all the KubeMQ messaging components to take files sitting in a remote location and copy them to an S3 bucket. The KubeMQ platform has four main components.
The first is the main component, the KubeMQ cluster. This is an enterprise-grade message broker and message queue: highly scalable, native to Kubernetes, highly available, and very secure. On top of it, we have three components that together form the KubeMQ platform ecosystem. The first is KubeMQ Targets, a container-based connector that allows connecting from KubeMQ to roughly 70-75 different services: databases, caches, other messaging systems, file systems, storage, and so on. The second component is KubeMQ Sources, which is effectively the other side of KubeMQ Targets: it allows ingesting data into KubeMQ. We will use it in one of the examples in the use case. It brings data into KubeMQ so the data can then be routed between services, or passed on through other connectors such as KubeMQ Targets. KubeMQ Bridges handles interconnectivity between KubeMQ clusters. It enables transferring data from one KubeMQ cluster to another, as well as replicating, aggregating, or otherwise transforming data between clusters, including across clouds or availability zones, between two or more KubeMQ clusters. We'll start with a short discussion of how KubeMQ works and its main features, and later we'll see the features that help us in our use case. First of all, KubeMQ is deployed with an operator for full life-cycle operations. This is very important because you want the ability to do live rolling upgrades and day-two operations. It is written in Go and ships as a small, lightweight Docker container. It supports two main messaging families, asynchronous and synchronous. On the asynchronous side, we have durable FIFO-based queues, a send-and-forget type of messaging, and we have publish-subscribe events.
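As a rough sketch of what the operator-based deployment looks like, a minimal KubeMQ cluster custom resource might resemble the following. The apiVersion, kind, and field names here are assumptions modeled on typical KubeMQ operator manifests; the authoritative version is the manifest generated by the Build and Deploy tool shown later in the demo.

```yaml
# Hypothetical sketch of a KubemqCluster custom resource handled by the operator.
# Field names and apiVersion are assumptions; use the generated manifest as the
# source of truth.
apiVersion: core.k8s.kubemq.io/v1beta1
kind: KubemqCluster
metadata:
  name: kubemq-cluster
  namespace: kubemq-demo
spec:
  replicas: 3              # three broker nodes for high availability
  grpc:
    expose: LoadBalancer   # expose the gRPC interface outside the cluster
```

The operator watches resources of this kind and handles upgrades and scaling, which is what makes the live roll-up of upgrades possible.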
Events are also an asynchronous messaging pattern, and we additionally have pub/sub with persistence: instead of an in-memory pub/sub for events, we have publish-subscribe backed by persistence, which we call events store. In the synchronous family, we have the RPC command and query messaging patterns. One of the main features of KubeMQ is its transport layer. KubeMQ supports gRPC, REST, and WebSocket transports with TLS, in both RPC and streaming modes. This is very important, and we'll discuss why streaming support matters in a Kubernetes messaging system, mainly when working with very low-bandwidth devices in edge locations. KubeMQ also supports access control with authorization and authentication, and we have multicasting and smart routing, which we'll touch on a little later. One of the key features of KubeMQ is that almost no messaging configuration is needed: no queues or exchanges to set up. You simply install it, send a message, and that's it. KubeMQ has SDKs for .NET, Java, Python, Go, and Node, which sit on top of gRPC and protobuf, and of course there is also a REST interface for frameworks that don't support gRPC. Let's talk about the queue messaging pattern. If you're familiar with Amazon SQS, KubeMQ queues are very similar: a FIFO-based, order-preserving message queue with an exactly-once message delivery guarantee. You can batch sends and receives, set expiration on messages, and delay message processing. We have dead-letter queues, long-polling, and streaming of queue messages in and out. We have message peeking, meaning you can look at a queue, see which messages are waiting for processing, and then decide what to do: you can ack all messages or specific ones, change message visibility, reject messages, or resend messages to a different queue.
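To make these queue semantics concrete, here is a small self-contained Python sketch. This is not the KubeMQ SDK; it is just an illustration of the pattern described above: FIFO order, peeking without consuming, and moving a rejected message to another queue such as a dead-letter queue.

```python
from collections import deque

class QueueSketch:
    """Toy FIFO queue illustrating the peek / receive / reject semantics."""

    def __init__(self):
        self._messages = deque()

    def send(self, body):
        self._messages.append(body)

    def peek(self, n=1):
        # Look at up to n waiting messages without removing them.
        return list(self._messages)[:n]

    def receive(self):
        # Pop the oldest message (FIFO, preserving order); None if empty.
        return self._messages.popleft() if self._messages else None

    def reject_to(self, other):
        # Move the head message to a different queue (e.g. a dead-letter queue).
        msg = self.receive()
        if msg is not None:
            other.send(msg)
        return msg

q, dlq = QueueSketch(), QueueSketch()
q.send("m1"); q.send("m2")
print(q.peek(2))         # peeking leaves both messages queued
print(q.reject_to(dlq))  # "m1" moves to the dead-letter queue
print(q.receive())       # "m2" is delivered next, in FIFO order
```

The real broker adds durability, visibility timeouts, and delivery guarantees on top of this basic shape.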
Queues also support pull and push modes, depending on your architecture. The events pub/sub messaging pattern is a real-time pattern, and it is very, very fast; when I say very fast, we're talking about millions of messages per second, all in memory. It has consumer group support with wildcards, and load balancing across numerous subscribers. It has an at-most-once message delivery guarantee, meaning that if you didn't consume a message, it is lost. As I said: wildcards, partitions, and no persistence. The events store pattern is like the events messaging pattern, but persistent: every message is persisted to some kind of storage. In KubeMQ, you can define a PVC, or you can use the ephemeral file system that the container provides. Like in-memory events, it supports consumer groups, but with an at-least-once message delivery guarantee, which means you can replay messages later. It also supports resuming: you can connect and ask for messages starting from the last message that was sent, or from the first message in the store. It supports message sequence, timestamp, and time duration, so you can replay according to what you need. The RPC query and command message patterns are the synchronous part of KubeMQ's connectivity. They are mainly for real-time request-response: for example, you have a command, or you have a query where you send a message to a database service and get the query results back. There are two sub-patterns: one we call command, the other query. A command is like a webhook: you send a message to some service, and you get a response back saying whether it worked and, if not, what the error was. A query is more like sending a request to a database and getting data back. Together, these give you the full view of KubeMQ's messaging capabilities. What is the advantage of using KubeMQ over other solutions?
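Before turning to that, a quick illustration of the command/query distinction just described, in plain Python. This is not the KubeMQ SDK; the handler names are invented for the example, and the point is only the shape of the responses: a command returns an ack or an error, while a query carries data back.

```python
def handle_command(action):
    """Command pattern: the caller only learns success or failure (webhook-like)."""
    try:
        action()
        return {"executed": True, "error": None}
    except Exception as exc:
        return {"executed": False, "error": str(exc)}

def handle_query(fetch):
    """Query pattern: the caller gets data back, not just an acknowledgement."""
    return {"executed": True, "body": fetch()}

print(handle_command(lambda: None))       # ack-style response, no payload
print(handle_query(lambda: [{"id": 1}]))  # data-carrying response
```

In KubeMQ both patterns ride the same synchronous transport; only the response contract differs.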
First of all, KubeMQ was designed and optimized to work on Kubernetes, with seamless integration with other Kubernetes components. That means metrics; that means working with service meshes; and it means it can run everywhere: in the cloud, on-prem, on an edge device, in-cluster, or standalone. It can even run on a drone, supporting everything from small ARM devices up to very high-end workloads on powerful CPUs with plenty of memory. It provides all the messaging patterns, runs anywhere, and has a very low resource footprint; the Docker image is about 40 MB, so it's very, very small. You can develop and build a complete architecture with KubeMQ and the other components, Targets, Bridges, and Sources, which we will see in action in our demo and discuss now. It's enterprise-ready out of the box, with no need for a dedicated persistent volume; again, that depends on which messaging patterns you use. I'd also like to stress the importance of the gRPC interface: better performance, lower latency, and a unified API. One of the biggest advantages is that when you are using messaging on Kubernetes, opening and closing connections all the time is very costly. With KubeMQ's streaming capabilities, you open a connection once and can stream as much data as you want. This gives you low latency and consumes fewer resources when transferring data between endpoints. OK, let's look at the first connector of the KubeMQ messaging framework: KubeMQ Targets. KubeMQ Targets enables you to build a message-based microservices architecture on Kubernetes with minimal effort, without developing connectivity interfaces between KubeMQ, message workers, and external systems such as databases, caches, messaging systems, and REST-based APIs. When you're building microservices on a messaging platform, you need the interconnectivity with other services.
For example, you have an API and a database, and you need to get information from the database into the API; or you want to save some data in a cache; or you want to put data on a queue to process later. KubeMQ Targets gives you this ability to connect KubeMQ to other services. KubeMQ Targets is an open-source project sitting in GitHub, where you can log in and see the wide range of supported services: about 80 different connectors for this container. There are caches such as Redis and Memcached, databases such as Postgres, MySQL, MongoDB, and Cassandra, and messaging systems such as Kafka, RabbitMQ, and MQTT. We also have support for the basic managed services in the clouds: in GCP, including the caching services there; in AWS, where you can see the large set of services we support across storage, databases, and messaging; and also in Azure. The next thing we're going to discuss is KubeMQ Sources. KubeMQ Sources is the other side of KubeMQ Targets: it ingests data into KubeMQ, enabling you, for example, to form an ingestion component in front of your backend. It supports other messaging systems such as RabbitMQ, Kafka, and MQTT, as well as file storage, and we'll see later in the demo how that works. It mainly works together with KubeMQ Targets, and it is also useful for migrating old services. For example, if you have services on RabbitMQ and want to move to Kubernetes while still keeping connectivity to RabbitMQ outside of Kubernetes, you can use the RabbitMQ source and target connectors to form a migration path. KubeMQ Sources is also an open-source project; you can look into it on GitHub, where all the supported sources are listed. For HTTP, we have something like an API gateway, which is a very interesting connector.
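A KubeMQ Targets deployment is driven by a bindings configuration that pairs a KubeMQ side with an external service. As a hedged sketch of the queue-to-S3 case we'll build in the demo: the kind names such as kubemq.queue and aws.s3, and the property names, follow the conventions used in the kubemq-targets repository but should be verified against the generated manifest; addresses and credentials below are placeholders.

```yaml
# Hypothetical kubemq-targets binding: drain a KubeMQ queue into an S3 bucket.
bindings:
  - name: queue-to-s3
    source:
      kind: kubemq.queue            # take messages from a KubeMQ queue
      properties:
        address: kubemq-cluster-grpc.kubemq-demo:50000
        channel: s3
    target:
      kind: aws.s3                  # write each message as an object in S3
      properties:
        aws_key: "<AWS_ACCESS_KEY_ID>"
        aws_secret_key: "<AWS_SECRET_ACCESS_KEY>"
        region: "<AWS_REGION>"
```

The Build and Deploy tool shown later generates this kind of configuration for you, so you rarely write it by hand.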
You can put KubeMQ Sources as an API gateway in front of your users, absorb data in, and place it on a queue to process later. For messaging, we support almost everything available, such as Kafka, IBM MQ, and ActiveMQ, plus file-system storage. As with Targets, we also support the cloud-type sources in Amazon, Azure, and GCP. To conclude the ecosystem components of KubeMQ, I want to discuss the Bridges connector. The Bridges connector allows you to connect KubeMQ clusters to each other. There are several connectivity options, which I'll show you in a second in the GitHub repository, but the main idea is the ability to interconnect KubeMQ clusters no matter where they are: in other regions, other availability zones, in the cloud or on-prem, depending on how you want to connect them. Back to GitHub: KubeMQ Bridges is also an open-source project, and it supports four topologies. The first is bridge, a one-to-one connection between clusters, meaning I can connect from cluster A to cluster B. We have replication, meaning the data in one cluster can be replicated to multiple other clusters; this is very appealing if you have analytics workloads or many flows of streaming data that you want to consume in several KubeMQ clusters. We have aggregation, meaning you can collect data from many clusters and send it to a single cluster, and what we call transform, which is like a mix of replication and aggregation together. This concludes the description of the KubeMQ platform components, and at this point I'd like to show you a use case of using KubeMQ and its components to move data between the edge and S3 buckets in AWS. The use case I'm going to show is taken from one of our clients, a multinational technology company.
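The four topologies map onto how many source and target connections a binding has: one-to-one is a bridge, one-to-many is replication, many-to-one is aggregation. As a hedged sketch of a one-to-one bridge binding (the structure and kind names are assumptions modeled on the kubemq-bridges repository; addresses are placeholders):

```yaml
# Hypothetical kubemq-bridges binding: forward one channel from cluster A to B.
bindings:
  - name: edge-to-gcp
    sources:
      kind: queue                     # pattern to consume on the edge cluster
      connections:
        - address: kubemq-cluster-grpc.kubemq:50000
          channel: s3
    targets:
      kind: queue                     # replay on the same pattern remotely
      connections:
        - address: "<GCP_LOAD_BALANCER_IP>:50000"
          channel: s3
```

Adding more entries under connections on either side would turn the same binding into replication or aggregation.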
They have hundreds of remote edge locations that send, on a daily basis, hundreds of gigabytes of files that need to be uploaded to S3 for research. The research is done by other services that digest these files and produce outputs that are sent back to the clients with some information. They currently use IBM MQ, because they come from the VM world, and as they move to Kubernetes they need a solution that is container-based, more robust, and much faster than what they have today. They also have both cloud and on-prem deployments that need to be supported, which means it's not only edge to AWS; it's also about bridging between an on-prem location and the cloud. I'm going to divide the demo into four steps. Before starting the steps, let me briefly mention that we're going to use the KubeMQ Build and Deploy tool. This is an online configuration tool that helps us configure all the components and deploy them very quickly to a Kubernetes cluster; we'll use it for all the configuration. The steps go like this. In step one, we deploy a KubeMQ cluster on a remote Kubernetes cluster on GCP (Google Cloud Platform). We add a KubeMQ Targets connector that on one side is connected to the local KubeMQ cluster, and on the other side sends and saves files to an S3 bucket on AWS. In the second step, we create a local Kubernetes cluster with k3d, which runs K3s in Docker, and deploy a KubeMQ cluster on it. We then add a bridge that connects the local KubeMQ cluster to the remote one sitting on GCP. The third step is the configuration of KubeMQ Sources.
This will be a standalone KubeMQ Sources application that on one side listens to a local folder, where we'll later put some files, and on the other side sends the files it picks up from that folder to a queue in KubeMQ, from which they will later be sent to S3 on the remote side. Step four will be moving some files. So let's do it. Step one. In step one, we create a KubeMQ cluster on a remote Kubernetes cluster; in this case we deploy on GCP. For this, I'm going to use our Build and Deploy tool, as you can see here. This is the KubeMQ Build and Deploy management console, a web application that lets you configure all the KubeMQ components and then, with the kubectl command line, deploy YAML files into a Kubernetes cluster to create the architecture you desire. So step one is creating a KubeMQ cluster on the GCP Kubernetes cluster, and also adding a KubeMQ Targets connector pointing to the S3 bucket, which will take files from a specific topic or channel, which we'll call s3, and save them as files on AWS S3. Let's start by creating the KubeMQ cluster. Clicking on Clusters, we add one, and here we create KubeMQ in a kubemq-demo namespace. Very important: we're going to expose the KubeMQ cluster externally so that the bridge in steps two and three can connect directly to the KubeMQ cluster running on GCP. We expose the gRPC interface with a load balancer. That's it; we save and deploy. When I click deploy, I get two manifests that I can work with. One is the initialization of everything KubeMQ: CRDs, definitions, RBAC; the second is the YAML that represents the KubeMQ cluster itself. We start with the init one: I copy it and run it in the console.
Since I already ran it before, it didn't change anything. Now I apply the second one, the cluster manifest, and we can see that it created a KubeMQ cluster in the kubemq-demo namespace. We can check the status with kubectl get pods -n kubemq-demo and see that everything is up and running, including the operator. Another thing I want to do is get the IP address of the load balancer that GCP created for me. We do that with kubectl get svc -n kubemq-demo; we take this IP address and save it for later use. The next step is to create the Targets connector. Here we deploy the KubeMQ Targets container into the same namespace where the KubeMQ cluster is running, and we select a connector of the S3 type. You can see there are two sides, what we call the source side and the target side. The source side is where you take the information from, and the target side is where it ends up, on S3. On the source side we select queue and the gRPC service; the host is the KubeMQ cluster gRPC service in the kubemq-demo namespace, and the channel is s3. On the target side I put in the AWS keys and the rest of the information, then save. Here I get the Targets configuration; of course I could add more targets if I wanted. I generate the manifest, but I want it to run in the demo namespace, so I set the namespace, get the manifest again, copy-paste, and run it: created. We can check the pods and see that the target is created. Another cool tool is kubemqctl, the KubeMQ command line. We can even look into the container's logs with kubemqctl get connectors logs, select the one we want, and see that it's already initialized. This concludes step one.
In step two, we first create a Kubernetes cluster on a local machine, acting as an edge device. We use k3d as the Kubernetes distribution; it runs K3s in Docker, here on Windows. Then we deploy a KubeMQ cluster inside this Kubernetes cluster on the edge. Then we create a bridge between the local KubeMQ cluster sitting on the edge side and the remote KubeMQ cluster sitting on GCP that we configured in step one. First, we create the edge cluster with k3d cluster create. Let's check that everything is up and running; it's running. Let's also verify from the pods side. I made a mistake, so let's run this one. Everything is running. Now we go back to our Build and Deploy tool. Here we create a cluster; I can delete the old one and then create a simple cluster in the default kubemq namespace, save, and deploy. Since we're starting fresh, I run the init manifest first and then the cluster manifest. To verify that everything is up and running, you can run kubemqctl get cluster. It's still not ready, which means it's still downloading the images. Now all of them are up and running; you can see three of three. Now we add KubeMQ Bridges; the role of this bridge is to bridge between the local KubeMQ cluster sitting on our local edge Kubernetes cluster and the GCP Kubernetes cluster with the KubeMQ cluster sitting there. To do this, we add a bridge in the KubeMQ Bridges menu. We call it bridge-s3, and here the source, because we are running locally, will be connected to the local KubeMQ cluster, the one sitting on the edge k3d Kubernetes cluster.
We select the kubemq cluster and the s3 channel, and the target will be the remote one, the KubeMQ cluster sitting on GCP. Here we use the load-balancer address that we recorded earlier when we exposed that cluster, with its IP address. We save, and we can see we have a bridge from the local KubeMQ cluster to the remote one; let's deploy. We get another manifest, copy-paste it here, and it's created. To verify, we can run kubectl get pods and see the bridge pod running; we can also use kubemqctl to get the connector logs, where we can see that on one side it connected to the local cluster and on the other to the target. Now we can go to the third step: running KubeMQ Sources as a standalone application, in this case on Windows. On one side, this KubeMQ Sources instance will listen to a local folder that we're going to set up, and it will send the messages, the files in that folder, to the local KubeMQ cluster sitting on the edge device.
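As a hedged sketch, the standalone Sources configuration for this step might look like the following. The kind names storage.filesystem and kubemq.queue, and the property names, are assumptions modeled on the kubemq-sources repository; the folder path and bucket name are placeholders for the demo's values.

```yaml
# Hypothetical kubemq-sources binding: watch a local folder, push files to a queue.
bindings:
  - name: folder-to-queue
    source:
      kind: storage.filesystem      # poll a local folder for new files
      properties:
        folders: "<LOCAL_WATCH_FOLDER>"
        bucket_name: "<S3_BUCKET_NAME>"   # naming for the downstream S3 target
    target:
      kind: kubemq.queue            # enqueue each file on the local cluster
      properties:
        address: localhost:50000    # reachable via the kubemqctl port proxy
        channel: s3
```

Because the binary runs outside Kubernetes, localhost only works once the cluster's gRPC port is forwarded to the machine, which is exactly the proxy step shown next.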
We do this in two steps: first, creating the YAML configuration file for KubeMQ Sources, and second, exposing the locally running KubeMQ cluster on a port that KubeMQ Sources can connect to. We start again in the tool, under Sources, and add a source of the file-system type. In the settings, we set the source folder name, meaning the local folder to watch; to this folder we're going to upload an image, and that is what we want to see in the S3 bucket. We set the bucket name, and you can see the bucket is currently empty; this is the bucket configured on KubeMQ Targets, and nothing is there yet. The target here is the local cluster, so we use localhost; again, this is because we are running it as a standalone application, not in a container. This is another advantage of the KubeMQ connectors: they can also run as standalone Windows or Linux binaries, or on other architecture types. We set the channel to s3, save, and deploy. Since we are not running on Kubernetes, we only need the URL of the configuration. We go to the KubeMQ Sources folder, cd sources, and run kubemq-sources, taking only the configuration URL. But before running it, I do some port forwarding with kubemqctl set cluster proxy, which forwards all the ports. Now we run it; it tries to connect to KubeMQ, and here it's connected and running. That's step three. Step four is a simple one: let's move some files into this folder, and what we'll see is that after a couple of seconds all the files disappear, because they are going to be
sent to the local KubeMQ cluster, then through the bridge to the remote one on GCP, then to the Targets connector, and from Targets saved to the S3 bucket. So let's do this: I'm going to put some files there, five images, numbered one to five. After a couple of seconds, we can see they have disappeared; let's check whether we have a problem, and look at S3 to see whether they were sent. And here they are, all of them: all the files that were on my edge device are now in the S3 bucket. This concludes our demo. If you'd like to try KubeMQ, head to kubemq.io and the quick start. This also concludes our webinar. Thank you for watching, and I hope to see you again soon. Thank you.