Okay, so welcome to the Dapr Day 2024 virtual conference. The subject today is Dapr and KubeMQ, and I will do the presentation together with Lior.

Hello, hello, everyone. My name is Lior, and I'm the founder and CTO of KubeMQ. You can find me and connect with me on GitHub, and also by email at lior.nabat at kubemq.io. Very happy to be here.

And me, I'm Gilles Flisch. I'm leading the .NET team at Elia. You can reach me at gilles.flisch at elia.be. I'm also the owner of Arc4u, which is open source and available on GitHub, and I'm working on a Guidance tool, which is available on the Visual Studio Marketplace. You can reach me by email at info at arc4u.net, and at GFlisch on Twitter.

The first thing we would like to say is that we already did a presentation, on Dapr Community Call #79, presenting Dapr and KubeMQ and all the work that was done by Lior. So please check that one first; it's also valuable information.

On the agenda today, the purpose is really to show how to use Dapr and KubeMQ from a dev machine all the way to a Kubernetes environment. It will be with .NET 8 and even .NET Aspire. We will spend some time on setting everything up, but also on how to build a microservices application, and on the best practices regarding Dapr and its configuration, so you can use it correctly from the dev environment to production.

Okay, again, what is KubeMQ? KubeMQ is an enterprise-grade message broker and message queue. It is very scalable, highly available and secure, and meant to run natively on a Kubernetes cluster. KubeMQ has four components in its ecosystem. The main one is the KubeMQ cluster, a very small, fast and lightweight cluster of messaging brokers. Then we have three families of components. One of them is KubeMQ Targets, which is open source and able to connect a KubeMQ cluster to different targets such as databases, caches, and many others.
These are what we call the third-party targets. The second family is Sources, meaning components to ingest data into KubeMQ. And the third is what we call KubeMQ Bridges, which allow us to connect KubeMQ clusters together, in order to provide some hierarchy and build complex architectures out of multiple KubeMQ clusters.

What I'm going to show you now is how to set up KubeMQ on your desktop very quickly. We're going to use the Docker version of KubeMQ; a direct binary will soon be available as well. I'm going to show you the whole flow of how to run it. In order to get a KubeMQ key, so you can run KubeMQ locally or in your cluster, you go to kubemq.io and click on the "try free" button. You will be redirected to the account page, where you have three options to sign up. I'm going to use the Gmail option; after two or three seconds, I'm registered. Then we get the options for how to install KubeMQ. I'm going to use the Docker option: I click on "run Docker", paste the command into my terminal, run it, and that's it. I go back to the web page, click on the dashboard, and I get the KubeMQ dashboard within a couple of seconds.

Okay, fine. So now I will share my screen to show you what I personally do as well. I'm back to the presentation. You can see that I'm mapping a volume, meaning that if I look at my local dev environment and click on the KubeMQ dashboard, we can see that some topics already exist. I will show you how easy this is, and this is something I like, because when we develop, we sometimes want something really clean in order to understand the different messages that are sent.
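The exact command is generated on the KubeMQ account page with your license key filled in; as a rough sketch, assuming the default image name and ports, the Docker invocation with the mapped store volume mentioned above would look something like this:

```shell
# Run a local single-node KubeMQ broker (illustrative sketch; the real
# command comes from the KubeMQ account page with your license key).
# Ports (assumed defaults): 8080 = dashboard/API, 50000 = gRPC, 9090 = REST.
# The -v mapping persists the message store on the host; deleting that
# folder and restarting gives a clean broker with no topics or channels.
docker run -d --name kubemq \
  -p 8080:8080 -p 50000:50000 -p 9090:9090 \
  -e KUBEMQ_TOKEN="<your-license-key>" \
  -v "$(pwd)/kubemq-store:/store" \
  kubemq/kubemq
```

The host path `kubemq-store` and the in-container store path are assumptions for illustration; the volume mapping is what enables the "delete the store, restart, start fresh" workflow described in the talk.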
So here is what I do, thanks to this capability of mapping to local storage: I stop the container and go to the folder that I've mapped. A store was created there; if you look inside it, you will see binaries holding the information of the messages sent to KubeMQ. I just delete it. Now, when I restart my Docker image and click on the KubeMQ dashboard, you see that I have a fresh instance: no topics, no channels, nothing is there. So I can start from a fresh instance and compare as I run the application. We will go back to the dashboard in the next part of the presentation.

Concerning the cell-based architecture: at Elia we analyze the business needs and decompose them into three levels. The lowest level is the one we use to start building the cell, which is a set of microservices that implement the business needs. We have an API gateway and a messaging platform, which is KubeMQ, used to exchange messages between the different services inside the cell and outside of the cell. The Guidance supports this and creates everything regarding the gateways and the services, and adds Dapr to deal with the messaging platform and the cache.

When we implement this cell-based architecture in code, what I like is that the developer works with the same configuration that he will have when deploying to a production environment. In development, all the components used by all the services in the cell are placed in one folder, and the developer has to deal with the scopes and define them correctly, to select which component will be used by a specific service, exactly like when we deploy on Kubernetes.
We deploy all the components in a namespace, and we have to play with the scopes to say that this component is used by these services and not by the others. We can see this in the solution I have here. We have a structure with a back end and a front end; looking at the back-end part, we have different services: contract, core, mail. We also have a Dapr folder, which contains the different configurations that we may use, in this case the development one and the staging one. The same components are defined for development and for staging. If I take the store component here, we see that we use the scopes to define which service will be using it. So the developer has exactly the same constraint regarding Dapr, defining the right scopes, when he runs locally. Afterwards it's just a copy-paste, changing only the configuration values such as the address or the password.

For the typical component with KubeMQ, we use the pub/sub building block of Dapr. What you need to set up is as simple as the address of your KubeMQ cluster or instance; in this case we are running on localhost. Then we have two of what we call pub/sub event types. One we call "events" and the second we call "events store". The difference between them: events are a very fast, in-memory pub/sub, while events store is a pub/sub with persistence, meaning that later on you are able to retrieve messages, all of them or starting from the beginning, and build any kind of replay mechanism on your channel. The group is the ability to group a couple of clients together, in order to do, for example, load balancing between services. The client ID is a unique identifier for the service, so that the KubeMQ server can identify which client is connected.
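Putting those pieces together, a Dapr component definition along these lines would capture the setup described above. This is a sketch: the component name, the app ID in the scopes, and the group and client ID values are illustrative, and the exact metadata keys depend on your Dapr version.

```yaml
# Dapr pub/sub component for KubeMQ (illustrative sketch).
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: kubemq-pubsub
spec:
  type: pubsub.kubemq
  version: v1
  metadata:
    - name: address            # KubeMQ gRPC endpoint (localhost in development)
      value: "localhost:50000"
    - name: clientID           # identifies this service to the KubeMQ server
      value: "mail-service"
    - name: group              # receivers in the same group load-balance messages
      value: "mail-group"
scopes:                        # only these Dapr app IDs may use this component
  - mail
```

Moving from development to staging or production is then the copy-paste described in the talk: the same component definition with only the address and credentials changed.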
Okay, so Lior, if I'm in a service and I just publish a message to a topic, the group and the client ID are not something that I need to configure at the component level, because they are not used in that case? Yes, but if you want to keep the name of the service that sent the message, in order to see the details in your dashboard, you will need to set the client ID. The group is on the receiver side. Okay, good. So if your services are deployed in a Kubernetes environment with, say, replicas set to three, and you want only one of the instances to receive a message that was published, you have to define a group; otherwise you will receive the message three times. So it's really important to have this concept in mind; it depends on what you want to do.

When I generate the code via the Guidance, the components directory is linked to the development folder. All the services that we start, and we will show you this in Aspire, are linked to the same folder where all the components are defined. It means that the developer is immediately facing the same way Dapr is configured when you deploy to Kubernetes: in your namespace, a lot of components are declared, and just the scopes are used to filter what will be used by a specific service or not. So when developers make a mistake, they see it immediately, and they are able to correctly configure the staging or the development environment.

So now I will switch to the solution, and I will go to the Dapr settings. I think I've already cleaned this up here: because I'm on Aspire, there is no Dapr Sidekick configuration as before. I was using Dapr Sidekick locally on the development machine, and we are still using it, but on this project I'm able to run everything with .NET Aspire to check and validate it.
In this case, I commented that part out, so Sidekick will not be used. And if I look at the Aspire solution, in the Program I can see that for each service that exists, I have a Dapr sidecar. I'm currently using preview 2; preview 3 is theoretically due this month, February, but at the moment of recording this presentation we are only on preview 2. You see that we define the resources path, which just points to the folder with the Dapr development environment and all the configuration that we use. I do this for each service, and those services will, in the end, be deployed to Kubernetes in a namespace. Really important: I'm going to do a demo and show you the dashboard, so it's not the moment to show that yet; I will continue with the presentation.

Now, when we go to production, we do not yet have Dapr in Kubernetes for all the applications; we are not fully migrated to Kubernetes. Some applications cannot be deployed to a Kubernetes environment yet, mostly because we don't have an on-premises Kubernetes cluster, and for critical applications we don't do this in the cloud for the moment. In that case, we are still running Dapr via Sidekick, inside the services. We use the Sidekick functionality and activate it via the settings; but when we deploy to Kubernetes, we just annotate the YAML deployment manifest, mention the app ID, and remove the Dapr Sidekick section. So we are really able to switch: locally it runs with .NET Aspire, and in production it runs either via Sidekick or in Kubernetes, and we are really flexible and can change this easily. This is what I like in this architecture: the capability to really switch, with a lot of flexibility.
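The Kubernetes side of that switch is done with the standard Dapr annotations on the deployment manifest. A sketch, where the service name, image, and port are illustrative:

```yaml
# Deployment excerpt (illustrative): the Dapr sidecar is injected via
# annotations, so the in-process Sidekick section is removed from the
# application settings when deploying this way.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mail-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mail-service
  template:
    metadata:
      labels:
        app: mail-service
      annotations:
        dapr.io/enabled: "true"    # inject the Dapr sidecar into the pod
        dapr.io/app-id: "mail"     # app ID; must match the component scopes
        dapr.io/app-port: "8080"   # port the service listens on (illustrative)
    spec:
      containers:
        - name: mail-service
          image: registry.example.com/mail-service:latest  # placeholder image
```

With `replicas: 3`, this is also exactly the situation where the KubeMQ group setting matters, so that only one instance handles each published message.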
So now we will see the KubeMQ dashboard and all its possibilities, and for this I will start by running the application with Aspire and showing it live. I start, and Aspire launches everything for me: all the different services and the Aspire dashboard. We can see that Aspire starts the Dapr sidecars side by side with the different services, and it also gives me access to the app. When I launch it, I'm able to connect to my web page, go to the different services that exist, and, for example, query the list of companies. We will see the part regarding tracing later; everything is wired up nicely. I can also show you the Dapr logs: I can see that the KubeMQ component started correctly, so we have the secret store, but we also have the KubeMQ pub/sub component. So we have an environment which is up and running.

Now I will launch the KubeMQ dashboard. Even though no message has been sent yet by the application, I already have the services listening on the different topics that I created for it, and I will have a look at what is going on for the user. When we create a user, there is a mechanism where a message is sent to say, okay, you are registered. And Lior, there are two concepts here, subscribing and watching; can you explain the difference between subscribe and watch? Yes. Subscribe is a simulation of a real client subscribing to the channel: you actually get the messages. Watch is like monitoring, eyes on what's going on in the channel, without actually acting as a consuming client.
So it's meaningful for debugging and for development mode, where you are able to watch on top of the channel. By the way, it's very, very useful for other messaging patterns, like commands and queries, where you can see who is sending a request and who is responding; in the same way, you can watch the channel and see what traffic is running through it.

Okay, so I will subscribe to this channel, meaning that when I publish a message, I will be able to see its content. I'm using CloudEvents, so you will see what is happening. Now I will show this in the application again. I have subscribed to the user topic, so first I will create a company; a message is sent, but I will show it with the user. Now I create a user. Each time I create a user, a message is sent through Dapr to KubeMQ, and from KubeMQ to the mail service. A mail is sent; you can see the mail here. And when I go back to the KubeMQ dashboard and click, because I have subscribed, I can see the message and its content. You see the data as well; it's a CloudEvents message, meaning we have the CloudEvents envelope, with my message in the data field. We can also see the trace ID, which is used for OpenTelemetry; I will show this in Aspire. And we also use the type: based on the type of the message, it will be routed to the service endpoint to be handled, in this case by the mail service. I will show you this in the project, because it's really interesting.

Thanks to that, I can see what is happening on the wire, and, which is also interesting, when I subscribe I can take a copy of a message and republish it. I can just click it.
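What shows up in the dashboard is the CloudEvents envelope that Dapr wraps around the payload. A sketch of such a message, where every field value is illustrative and the exact set of envelope fields depends on the Dapr version:

```json
{
  "id": "6a4c8e9f-0000-4000-8000-000000000000",
  "source": "contract",
  "type": "UserCreated",
  "specversion": "1.0",
  "datacontenttype": "application/json",
  "pubsubname": "kubemq-pubsub",
  "topic": "user",
  "traceid": "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01",
  "data": {
    "email": "jane.doe@example.com",
    "name": "Jane Doe"
  }
}
```

The `type` field is what drives the routing to the right endpoint, and `traceid` is what links the message to the OpenTelemetry trace visible in the Aspire dashboard.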
I will resend it, and you can see that the mail arrives again, because the mail service receives the message again. This is really handy when you are debugging your application: you have a full process, you just need to test one component, so you publish the message again and again, and you can debug your service. It's a really interesting feature, and easy for a developer to use via the KubeMQ dashboard.

Now, if I go back to the project: at the level of the project I have, for each service that exists, a Dapr controller. I will use the mail one, for example. The Dapr controller has a specific group name that I exclude from the Swagger page, because it is only used by Dapr; I don't want it to be visible in the Swagger page. For each event that is handled, I have a POST action with its route. And if I look at the Dapr subscriptions definition for the mail service, you can see, for each type of message, the route that will be used to reach the service. This is how the glue is done: we have seen the message created, the CloudEvents type is sent along, KubeMQ transports it, I can see it in the dashboard, and it is mapped to the right endpoint via the subscription.

Regarding OpenTelemetry, well, it's not really an OpenTelemetry concept, but KubeMQ also has a view where you can see all the clients that were connected and exchanging messages via a specific channel. It's a nice view, because you can already see information about who is sending, who is receiving, and who is doing both. A nice feature to understand what is going on. And there I can see the real messages flowing, not the ones I republished multiple times via the dashboard.
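The subscription definition described above can be expressed as a declarative Dapr subscription with routing rules. A sketch, where the subscription name, topic, event type, and paths are illustrative:

```yaml
# Declarative Dapr subscription for the mail service (illustrative sketch):
# messages on the "user" topic are routed to a controller action
# according to their CloudEvents type.
apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
  name: mail-subscription
spec:
  pubsubname: kubemq-pubsub       # the KubeMQ pub/sub component
  topic: user
  routes:
    rules:
      - match: event.type == "UserCreated"
        path: /dapr/usercreated   # POST action on the Dapr controller
    default: /dapr/unhandled      # fallback route for other event types
scopes:
  - mail                          # only the mail app ID gets this subscription
```

Dapr delivers each matching message as an HTTP POST to the given path, which is why the controller actions are plain POST endpoints hidden from the Swagger page.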
In the trace, I can see the gateway call with all the concepts inside it regarding security, the contract service which is called to create the user, and the message which is sent via Dapr and the pub/sub mechanism, and I can really see the trace and all the details about this message thanks to OpenTelemetry.

Lior will now explain the CLI tool and what it is exactly. You just run it, and it automatically opens a new web page. I can see the dashboard; it's currently clean, there is no KubeMQ yet. I have k3d running as a local Kubernetes cluster. What I'm going to do now is simply put my license key into that cluster, and that's it. We wait about 10 to 12 seconds, and it brings up all the necessary components. In Kubernetes we use the operator to load all the KubeMQ components, such as Bridges, Targets and Sources, and within about 10 seconds we can see that it's up and running. If you want to look at the dashboard of the cluster, you can click here. We currently have two nodes up, and as we speak you can see the third one coming up. From here we can continue: we can see logs, we can see service endpoints, and we can do port forwarding if we want local access to the cluster, so that all the gRPC traffic on port 50000 is forwarded. And here we have all the settings: images, authorization, TLS, routing, health checks, many, many things. Out of the box, KubeMQ runs almost without any configuration, and that's it.

Thanks, Lior. I think it's really interesting to see how easy it is to finally deploy KubeMQ in a production environment. The last slide is the Q&A section.