So thank you for coming, and thank you to the organizers. This is my first year here at FOSDEM and I'm very happy about that. Today we are going to talk about how to debug and trace a RabbitMQ node. I will show you some RabbitMQ internals. Please don't try this at home, or better, don't try this in production. This is just to understand how powerful Erlang tracing and debugging can be. They already said something about me; I'm currently working on RabbitMQ on Kubernetes. I forgot to say that you can win this puppet today: I will ask some complex questions, and if you answer correctly, you can win one of these.

Just a quick show of hands, for the people that don't know anything about RabbitMQ. Do you know RabbitMQ? Raise your hand. A lot of people. Do you use RabbitMQ in production? Is there someone that does not use RabbitMQ? You deserve this one. As you can see, the prize is extremely hard to win. There is another one, so pay attention.

So, RabbitMQ is a message broker. The interesting part for this talk is that it's written in Erlang, and the command-line tools are written in Elixir starting from RabbitMQ 3.7. A node is just a running Erlang (and Elixir) application. The BEAM is the Erlang virtual machine.

Let's start by talking about analyzing a Linux machine, a server. What you usually do is connect remotely to the server, analyze the processes, analyze the memory, kill some process if needed. Maybe a process is using a lot of memory or a lot of CPU and you want to kill it and restart it, or run your scripts inside the server. So what if I told you that you can do the same with an Erlang application? The remote connection in Erlang is called the Erlang remote shell. There are several tools inside the BEAM, like top, which usually gets an 'e' in front, so etop, eprof, et cetera. There are several tools to trace the memory and understand what's going on inside the BEAM's memory.
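As a minimal sketch of the built-in tools just mentioned, from any Erlang shell attached to a node (the option values below are illustrative choices, not from the talk):

```erlang
%% etop and erlang:memory/0 are standard OTP tools; output depends on
%% the node they run against.

%% Top-like live process view, sorted by reductions, ten rows:
etop:start([{sort, reductions}, {lines, 10}]).

%% Memory usage of the whole VM, broken down by category, in bytes:
erlang:memory().

%% Stop the etop view when done:
etop:stop().
```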
And you can execute custom code inside the node. One of the most interesting features, I think, is that you can load code dynamically into a BEAM. Even if the BEAM is remote, you can load, or send, code to the virtual machine. I usually use this feature when I have to work, for example, with Kubernetes, because maybe I have to test some Kubernetes API and I don't have the whole Kubernetes stack locally, so I have to test it remotely. In my personal opinion, this is one of the best features.

So I will show you a live demo, because I am brave and I like to show things live. Here I have a RabbitMQ node running. That is this one. Can you see it? All right. The host name is rabbitmq-leap. My host name is "leap" because I just didn't want to spend an hour deciding on a host name for my machine, so I kept the default one; I don't have much fantasy. We will use the Observer tool, we will look inside the RabbitMQ database configuration, because for the people that use RabbitMQ, looking inside the RabbitMQ database is normally extremely hard. And we will call an internal function, we will trace it, and we will load a custom module.

So, let's try this. The first step is to access the remote RabbitMQ node. As you can see here, my local node is called "debug", but you can call the local node whatever you prefer. Can you see it, guys? That's better. Okay. And now I am inside the RabbitMQ node. It's easy, as you can see. Too easy. The first tool, which I think some of you know, is Observer. This one. Observer gives you an overview of what's happening inside your virtual machine: the number of cores, the memory; you can check, for example, the memory utilization, I/O, et cetera.
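The connection step can be sketched like this, assuming the node name rabbit@rabbitmq-leap and local node name "debug" from the demo (both are specific to this setup):

```shell
# Start a local node named "debug" and open a remote shell on the
# RabbitMQ node. The cookie must match the one on the server, which
# lives by default in ~/.erlang.cookie or /var/lib/rabbitmq/.erlang.cookie.
erl -sname debug -remsh rabbit@rabbitmq-leap -setcookie "$(cat ~/.erlang.cookie)"
```

RabbitMQ nodes use short names by default, hence `-sname` rather than `-name`.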
These are the processes, which you can order by memory usage, number of reductions, PID, et cetera. The interesting thing here is this one: when you go inside the Table Viewer, you can analyze the ETS tables and the Mnesia tables. RabbitMQ uses Mnesia only for the configuration. This is a common mistake: some people think that RabbitMQ uses Mnesia to store the messages. RabbitMQ uses Mnesia only for the configuration. In this way you can see what is inside the database; maybe you have some problem with a queue and you want to check if the queue actually exists, or things like that. Please don't touch the database, just read it. You should never modify this database directly. But if you want to try it, maybe just to break it, you can do that. Recovering is easy, because with RabbitMQ it's enough to delete /var/lib/rabbitmq/mnesia and RabbitMQ recreates everything from scratch.

Another problem with Observer is that it requires the graphics libraries, and when you have a remote server, for example in Kubernetes, you don't have the graphics libraries. So it is useful especially when you work locally. There is another tool, called observer_cli. Let me try that. Can you see it? Okay. This is more or less the same as Observer, but on the command line, so you don't have any problem with graphical libraries and you don't have to install anything on the server. Again, this is another great way to analyze a RabbitMQ node, and you can also use it in your own Erlang or Elixir application if you want. You should. You must.

Okay, so now let's call some internal function inside RabbitMQ and see how to trace it. Let's suppose that, for some strange reason, I want to create a queue. Too much. Excuse me. Okay. So this function, rabbit_amqqueue:declare with all its parameters, is the internal RabbitMQ call to create one queue.
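A sketch of that internal call, run from the remote shell; the exact arity of `declare` changes between RabbitMQ versions, so this follows the 3.x series and the queue name is illustrative:

```erlang
%% Build the queue resource: vhost <<"/">>, kind 'queue', name <<"test">>.
QName = rabbit_misc:r(<<"/">>, queue, <<"test">>),

%% declare(Name, Durable, AutoDelete, Args, Owner, ActingUser)
%% "debug-shell" is just a label for who is performing the action.
rabbit_amqqueue:declare(QName, true, false, [], none, <<"debug-shell">>).
```

This bypasses the AMQP layer entirely, which is exactly why it is a demo trick and not how you should create queues normally.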
So we just created one queue using the internal calls. You don't trust me, right? No. This one. Okay. You trust me now? Okay. Let me check the time; I still have time.

So now let's suppose that you want to trace a specific call inside RabbitMQ, but this applies in general: if you want to trace one call in your BEAM, it's enough to start dbg. I don't want to spend too much time on dbg now, because it would need a session of its own. But just to let you know: you start dbg, this is the pattern, the call that I want to trace, which is the declare with all its parameters, and here I say that I want to trace only the function calls. Now let's try again to create another queue; let's call it, for example, "three". Here, as you can see, there is the full stack: the first call, the second one, and the return of the function. You can see everything, basically. And the RabbitMQ node is still up and running, with the second queue.

Okay. Don't forget to stop... oops... don't forget to stop dbg, especially if you are tracing a production node, because dbg is not free. If you are inside, for example, a loop or some complex function, dbg can use a lot of memory, a lot of CPU, et cetera. Here we are sending the trace to the console, but you can trace to a file, which is better, or to a TCP socket, things like that.

Now we can do something more complex, something like this. Okay, this one, for example. I just created 100 queues. Okay, give it time to refresh. 100 queues, using always the same function, but inside a loop. But when you work in the console it gets hard: okay, you can write functions, but it starts to be extremely complex when you want to do something more elaborate. So this is what I want to show you: I have here this Erlang file, inside my local machine.
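The tracing session and the 100-queue loop can be sketched as follows; dbg ships with OTP's runtime_tools, and the `declare/6` signature is again from the 3.x series:

```erlang
%% Run in the remote shell on the RabbitMQ node.
dbg:tracer(),                          %% start the default tracer (console)
dbg:p(all, c),                         %% enable call tracing in all processes
dbg:tpl(rabbit_amqqueue, declare, x),  %% trace declare (any arity); 'x' prints
                                       %% both the calls and the return values

%% ...declare a queue here and watch the trace output scroll by...

dbg:stop_clear(),                      %% ALWAYS stop and clear trace patterns

%% The "100 queues in a loop" step, as a list comprehension:
[rabbit_amqqueue:declare(
     rabbit_misc:r(<<"/">>, queue, integer_to_binary(N)),
     true, false, [], none, <<"debug-shell">>)
 || N <- lists:seq(1, 100)].
```

Leaving the trace patterns active while running the loop is exactly the scenario the speaker warns about: dbg would fire on every one of the 100 calls.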
It's enough to add the path and load the module. Now we have a new module inside RabbitMQ, a custom module inside a running RabbitMQ node. I think this is extremely, extremely cool. Now, say hello. "The BEAM is very cool." Do you agree? Yeah. Who was the first one? That one. Cool.

So in this way you can, as I said, add or remove code, et cetera. But again, be careful what you add to your system. You can also remove and delete the module when you have finished with it. Or, if you want, you can write your own custom plugins for RabbitMQ. Why not?

Another thing, when I say that you can do whatever you want, is something like this: you can also stop RabbitMQ. And done. And you can start it again, in this way. Okay, I don't have the queues anymore, because I created temporary queues. For the people that know RabbitMQ: when you stop and restart RabbitMQ with temporary queues, you don't have the queues anymore. So when you have finished with your tracing, debugging and so on, it's enough to just kill the node and it's done.

One last note: you should use the hidden parameter. When you use this command with -hidden, you connect to RabbitMQ, or to another node, and your node will not be listed by the nodes() call; this is the right way to do it. Now I have only five minutes. As I said, I recently worked on a new feature in the RabbitMQ Kubernetes plugin and I used this a lot, because I didn't have Kubernetes locally and I couldn't try the function locally, so this was the best way to work. After that, I copied and pasted the function and it worked.

Security: someone is thinking, I guess, "oh my god, everyone can access my node at this point". Don't panic, because in order to access a remote shell you need to have the port mapper (epmd) port open and you need the Erlang cookie.
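The module-loading step can be sketched like this from the remote shell; note that the calls run on the remote node, so the path must be visible from there (in the demo both were the same machine), and the path and module name here are hypothetical:

```erlang
%% Make the directory containing hello_fosdem.beam visible to the
%% code server, then (re)load the module on this node.
code:add_patha("/home/user/demo/ebin"),
l(hello_fosdem),

%% The freshly loaded code is now callable like any other module:
hello_fosdem:say_hello().
```

When you are done, `code:delete/1` and `code:purge/1` remove the module again, which is the "remove and delete the module" step mentioned above.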
The Erlang cookie is a sort of secret that you shouldn't share with other people, and it's enough to enable the firewall; speaking about RabbitMQ, it's enough to expose only the AMQP port. You should use remote access only inside a trusted network. Don't try this in production. If you want to play with RabbitMQ, you can do that, but in order to create queues and exchanges you should use the standard APIs, because there are several constraints around queue creation and exchange creation. So play with it, but when you are in production, be careful: you have to know what you are doing.

I have more or less a couple of minutes left. I'm done, but I want to point out something that is extremely important to me, and I think the Italian guys will agree with me, because when I go around I see strange things marked as Italian dishes. I want to point out that spaghetti alla bolognese does not exist. I am from Bologna, and trust me, it does not exist. The second one, spaghetti Napolitan, does not exist. Fettuccine (or linguini) Alfredo does not exist; the question is, who is Alfredo? Because we don't know Alfredo. Do you know Alfredo? Another thing: "linguini" does not exist, it's not an Italian word. Maybe linguine, but linguini does not exist. And the last one: Italian cappuccino only for breakfast, please. Not for lunch. Please, please, guys.

So, a few links that can be useful for tracing; these are external tools that you can use if you want to play with this. If you have questions about food, wine, et cetera, feel free to ask, and I'm finished, thank you. Of course, also questions about RabbitMQ, the BEAM and so on. I will skip the food part.
I know you're working a lot on Kubernetes right now, and I wanted to know if all that you have shown will still work in the context of a container, and most of all if you ever experimented with distroless containers, which is one of the things I'm working on the most. I'm interested in how the Erlang shell plays with distroless containers.

I didn't test it. So, whether I can use the remote shell with distroless containers: I didn't test it, but I think it should work. I don't see any reason why it shouldn't, because Erlang creates its own virtual machine and the port mapping, so it should work, but it's a test I'd like to do. I usually use the standard RabbitMQ image and work on top of the standard RabbitMQ image. This is because I don't want to create yet another standard, you know. In my experience the creation of a new node is fast enough, so why would you want distroless? Is there any reason? All right. Speaking about Kubernetes in general, I didn't have any special kind of problem; the problem is only getting access in Kubernetes. Once you are inside, I didn't have any problem. Christian?

So, let me see if I have some story about debugging, et cetera. Let me say that I spend more time debugging RabbitMQ than writing code. For example, there was a bug involving the garbage collector. For the people that work with a garbage collector, the first thought is "the garbage collector is very cool, because I don't have to free the memory myself", and after a couple of months it becomes "how can I force the garbage collector to run?". So I had to find the issue; basically there was a problem with the garbage collector, and we introduced a parameter so that every X operations the garbage collector is forced to run. And I spent time with a tool called eprof, this one, which is a time profiler for Erlang.
So you decide to trace one specific call, and you can see which function is called the most, and then you can decide. In this case it was the garbage collector, because the garbage collector was running inside the function for some reason, and we decided to call it every, I don't know, X number of reductions. Other stories are mostly about Kubernetes. Another interesting one is that I found a bug in lazy queues. Do you know lazy queues in RabbitMQ? No? A lazy queue is a kind of queue, and I found the bug in the switch from the normal mode to lazy, using dbg and logging to a file, because it was a very big stack. Using dbg I could find the problem and resolve it; it was hard, but I could resolve the problem by analyzing the trace. Okay, thank you.