So I'm going to tell you a bit about manager modules in a fairly high-level way. I also didn't have much time to prepare this presentation, about 45 minutes last night. But anyway, the whole idea is, I'm not sure if you are familiar with how Ceph works and the fact that we have a ceph-mgr daemon. The idea of the manager itself is basically monitoring and management, while at the same time being an interface for external tools. For instance, the Ceph dashboard runs on the manager. We also have an orchestrator with multiple backends that lets us interface with DeepSea or with Rook, for instance. There are a bunch of modules nowadays, and one of them, for instance, lets us plug into a disk failure prediction service, things like that. The manager also aggregates most of the statistics data the cluster is producing, and nowadays it is mostly responsible for providing the health data and the health checks for a bunch of components in Ceph. Technically the clients also connect to the manager, depending on their purpose, but for health and that kind of thing they will grab it from the manager. They still speak to the monitors, the monitors speak to the managers, and the OSDs speak with both.

But this talk is about the modules themselves, not the manager, so I will just focus on the modules. To support these modules, the manager implements a very neat thing with CPython, in which each module runs on its own sub-interpreter, which apparently is something that may or may not go away; I'm not entirely sure. We're the only users of it, apparently, so a solution will have to be found to deal with that. The neat thing about all this is that we can have Python code running inside the manager, which is basically C++, while having access to all kinds of juicy information in real time: the maps, the health information, the PG statistics. People are doing all kinds of neat things with that; the Prometheus exporter, for instance, exports data to Prometheus, and from there you can do whatever you want with it.

Implementing a manager module is fairly straightforward. It basically extends the MgrModule base class, which handles all the hard work, so we really don't have to deal with much. The manager will call certain functions that the module needs to implement to do its stuff. Essentially, we implement a handle_command function to handle commands passed on the CLI; a shutdown function that is called when the manager or the plugin is being shut down; a notify function, which receives every single notification the manager provides, including health information; and serve, which is basically the module's main entry point. Anything outside of the basic initialization of the class should be done in serve, and since it runs as the module's main loop, some care may be needed there.

Handling commands is fairly straightforward as well. I'm sorry for the contrast; this is a screenshot of my Vim. The idea is simply that we declare which commands we want to use, handle_command is passed those commands, and we simply parse them. However, Ricardo, I think, implemented a new way of doing command handling, in which we just have a decorator that specifies the command itself, and we can simply use that function.
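To make that a bit more concrete, here is a minimal sketch of what such a module roughly looks like with the classic declare-and-parse style. The module name, command string and timings are made up for this example, and the exact method signatures have shifted a little between Ceph releases, so treat it as an illustration rather than a reference.

```python
import errno
from threading import Event

from mgr_module import MgrModule


class Blinky(MgrModule):
    # Commands are declared up front; the manager routes matching CLI
    # commands to handle_command().
    COMMANDS = [
        {
            "cmd": "blinky status",
            "desc": "Report what this example module is doing",
            "perm": "r",
        },
    ]

    def __init__(self, *args, **kwargs):
        super(Blinky, self).__init__(*args, **kwargs)
        self._shutdown = Event()

    def handle_command(self, inbuf, cmd):
        # 'inbuf' carries bulk data passed on the CLI with 'ceph ... -i <file>'.
        if cmd["prefix"] == "blinky status":
            return 0, "everything is fine", ""
        return -errno.EINVAL, "", "unknown command"

    def notify(self, notify_type, notify_id):
        # Called for every notification the manager emits (health,
        # pg_summary, osd_map, ...); this should stay quick.
        self.log.debug("got notification: %s", notify_type)

    def serve(self):
        # The module's main loop; runs in its own thread until shutdown.
        while not self._shutdown.is_set():
            self._shutdown.wait(10)

    def shutdown(self):
        self._shutdown.set()
```

Nothing in serve blocks for long here: it just waits on an event so that shutdown can stop the loop promptly.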
With the decorator approach, the manager calls the decorated function directly, which removes a bunch of the cruft in the module, because we don't have to describe all the commands and then have a handle_command function that just calls out to a bunch of other functions. And it's neat because, in this case, if the command had no arguments we would not have to specify any. The inbuf is basically what we provide on the CLI as an input file. If any of you are familiar with Ceph, the ceph tool allows you to pass a file with -i, so you can just point at the file and it will be sent along with the command. So inbuf is basically how you pass bulk data from the CLI.

The thing is, while I was trying to figure out how to get into understanding the plugins and whatnot, I had no idea what to do, so I basically made blinking lights. For that I used the Philips Hue I had at home; I have the bridge here and I brought some LEDs. I have a vstart cluster, so this is not something I'm using in production or anything yet. I also had a very patient flatmate. And I just created a very basic module that lets me blink lights or change the colors of lights depending on the cluster status.

I over-engineered the configuration file. Again, this contrast really sucks, I'm sorry. Basically, the file lets us specify a bridge, with a bridge name and the bridge address. The Hue itself needs a user, which I still need to implement properly; it's basically pressing the button on the bridge and calling an endpoint of the REST API, which is what it uses. It's very neat because they expose a nice API that we can just call via REST. Then it has this concept of groups, which are basically groups of lights. I have this group called "rack", for which I decided: for each health status, turn the lights green, yellow or red, and for each one I specify whether I want a solid color or blinking. So far this module only supports green, yellow and red, because you actually have to specify the hue, the saturation and the brightness, and I'm too lazy to do more than three colors.

Then we basically set up the bridge in serve, the actual initialization of the bridge, the connections and all of that. We use notify to grab the health information. And hopefully, when we shut down the manager, the lights will go off. There's a rough sketch of what that status-to-color handling looks like a bit further down. It may not happen today, simply because I was fiddling with the code last night, and that's usually not a good idea before a demo. So let's try to do this without things breaking too much.

Right now I have a vstart cluster which you are not seeing. How much, 15 minutes left still? Do you know how I can put this on the screen? Just mirror your screen? No, this is a multi-monitor add-on. But then I won't, okay. I drag the shell where, though? Where's my... Oh, okay, so I can just do this? Oh, there we go. Okay, so I actually have a vstart cluster running here, and on the other end I have DHCP running, because this is a very weird setup. Right now the cluster is at HEALTH_OK, and I have this neat script that I put together yesterday that I can show you. I think, you know, Jesus, what am I doing? Blinking lights. So basically what I'm doing is forcing weird statuses on the cluster. I know that a warning will be issued if I set a flag on the OSD map that is kind of a bad idea, like saying that the OSDs are not allowed to be marked in.
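As mentioned above, the core of the light handling is really just mapping a health status to a light state on the bridge. Here is a minimal sketch of that mapping, assuming the bridge address, API user and group id come from the configuration file described earlier and that the requests library is available; the hue values and the REST call are illustrative rather than taken from the actual module.

```python
import requests  # assumption: requests is available to the module

# Hypothetical values; in the real module these come from the config file.
BRIDGE_ADDR = "192.168.1.2"
API_USER = "hue-api-user"
GROUP_ID = "1"  # the "rack" group of lights

# One light state per Ceph health status: hue/saturation/brightness plus
# whether the group should blink or stay solid.
STATUS_TO_LIGHT = {
    "HEALTH_OK":   {"hue": 25500, "sat": 254, "bri": 200, "blink": False},  # green
    "HEALTH_WARN": {"hue": 12750, "sat": 254, "bri": 200, "blink": False},  # yellow
    "HEALTH_ERR":  {"hue": 0,     "sat": 254, "bri": 254, "blink": True},   # red
}


def apply_health_status(status):
    """Push the light state for the given health status to the Hue group."""
    light = STATUS_TO_LIGHT.get(status)
    if light is None:
        return
    action = {
        "on": True,
        "hue": light["hue"],
        "sat": light["sat"],
        "bri": light["bri"],
        # "lselect" makes the group blink for a while; "none" keeps it solid.
        "alert": "lselect" if light["blink"] else "none",
    }
    url = "http://{}/api/{}/groups/{}/action".format(BRIDGE_ADDR, API_USER, GROUP_ID)
    requests.put(url, json=action, timeout=5)
```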
Setting that flag will turn the state to warning, and if I mess with some of the full ratio flags I will get errors out of it. So this is basically what I'm doing. Let's just turn this to warn, and hopefully this will change the lights just a bit, okay? Oh... so yeah, okay. So this is now a warning. I'm incredibly proud of this. It's very simple and very useless, but it's lights. What? Ah, that's a good point. Sorry, what does health look like now? It just shows HEALTH_WARN, noin flag is set. So, yeah. That's it. So let's go back to okay, and it will turn green, yeah. Okay, so health is okay again because we don't have that pesky flag set anymore. And let's just force this to ERR. And it's blinking. So yeah, the cluster health is now ERR because the full ratios are out of order. And let's put this back to okay, because I'm still trying to figure out how to just keep it blinking. I'm fairly sure I can force this in serve by just telling the bridge to keep blinking until someone has a seizure, but let's move this back to okay so we don't freak out. Okay, can I move this back here? Okay.

So that was the fun part of the presentation, but I was told I should have a point here. So, where to go from here? I actually have a few things in mind. I want to have the colors pulsating, pulsing, pulsating, whatever, you get the gist, while the PGs are recovering, going between yellowish and blueish until things get back to bright green. I would also very much like, and I don't think it's that hard, to make this CRUSH-map aware, so that we could ideally, in a perfect world, have bulbs on top of racks and actually show in which racks the OSDs are misbehaving, or which OSDs are unbalanced, or where in the data center things are not as they should be. Everyone likes blinking LEDs anyway, so why not lights as well? And given that the manager also allows modules to communicate with each other, using a very obscure function that should definitely be renamed, Ricardo, we could grab an inventory from, say, the orchestrator once that's actually working, or the orchestrators themselves could just say "blink this OSD" or "blink this rack" and we could blink things. Obviously I'm using Philips Hue, but I'm sure there's plenty of other hardware that also provides REST APIs. This was just the hardware I had available, and I am in no way advocating for people to do things with Philips; they don't pay me to say these things.

So, questions, and I still have 10 minutes. It's either questions or a break; you choose, you decide, I'm okay either way. No? Okay, we have two apparently. Repeat the questions, yes. If you find some way to modulate or transcribe things, sure, why not? Yeah, text-to-speech, why not? It's not that hard; that was just not the purpose of this one. What? I repeated it? I didn't repeat the question. Okay, so the question was whether we could basically have text-to-speech for the health warnings. Yes. The health warning comes in just like the health warnings we would get in any other part of Ceph: it's a JSON with the health status and all the checks. So for every single thing that is wrong with the cluster, we get a status message for the check itself, like PG degraded or OSD down or something like that. So we could technically just grab that information.
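To give an idea of what that looks like from a module's point of view, here is a minimal sketch of pulling the health report inside notify and walking the individual checks. The exact field names can differ a bit between releases, so take them as best-effort rather than authoritative.

```python
import json

from mgr_module import MgrModule


class Lights(MgrModule):
    def notify(self, notify_type, notify_id):
        # Only react to health updates; ignore pg_summary, osd_map, etc.
        if notify_type != "health":
            return

        # get("health") hands the health report back as a JSON string.
        report = json.loads(self.get("health")["json"])

        status = report.get("status", "HEALTH_OK")  # HEALTH_OK / WARN / ERR
        self.log.info("cluster health is now %s", status)

        # Every failing condition shows up as a named check, e.g. OSD_DOWN
        # or PG_DEGRADED, with a severity and a human-readable summary;
        # that message is the text you could hand to lights or speech.
        for name, check in report.get("checks", {}).items():
            message = check.get("summary", {}).get("message", "")
            self.log.info("%s (%s): %s", name, check.get("severity"), message)
```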
We could put those messages into a text-to-speech thing and blast them into your sysadmin room or something. Lenz, you also raised your hand; I'm not sure if you still want to ask anything. You've got the question? That's fine. But the bridge would have to be on the same network? Right, the bridge needs to be somewhere on the network. But yeah, well, I'm sorry: the question was whether the manager and the bridge need to be on the same network. The answer is yes, or you find a way to expose the bridge's REST API to the manager. So yeah, please.

How does the manager deal with a module that crashes or does something otherwise untoward? That's the question: how does the manager deal with a module that crashes or does something untoward? Okay. In my experience, what happens is that the manager outputs a lot of obscure information that makes it very hard to debug what's wrong with your module. But otherwise, given that each module runs on its own sub-interpreter, it should not affect anything else. I am told that the notify function, where I'm actually doing the REST API calls, should not be used that way, because it's supposed to be a very quick call, and that more complex modules like the dashboard have their own notification queue that eventually deals with that stuff. I was lazy, and I also wasn't doing this for a lot of complexity. But I have actually seen it when the bridge was turned off: the notify function goes haywire because it can't establish a connection, so the module keeps throwing errors. It doesn't block the manager itself too much, because I still keep getting notifications, so I'm guessing the manager has a bunch of other threads that are calling into the module, and it should not be affected. That doesn't mean it can't cause complications, because we still have a bunch of modules running on the manager and I'm guessing things can go wrong, but other than that I can't speak from experience. If that answer was not satisfying, I can point you to people who will very easily answer your question, as long as he looks up from his screen.

Yes, please. The question was, I mentioned something about Python sub-interpreters going away? Okay, so apparently the CPython developers, or the community around it, realized that not many people use sub-interpreters, so they are... I think Ricardo would be the best person to actually answer this. Do you want to...? So the question is about the sub-interpreters and the impact that losing them would have on the ceph-mgr. And you may need to turn on the mic, I'm not sure if it's off. The problem we are currently facing is that we are using this sub-interpreters feature of CPython, where each module runs on its own sub-interpreter, and some of the modules use libraries that are built as CPython extensions. The CPython developers have removed the support for sub-interpreters in a recent release, which breaks our stuff, so we are working on a solution for this problem. But the modules themselves are not going away, and the support for Python is not going away; a solution is being worked on, and until then we are just pinning an older version of CPython. I don't think we will have... Two more minutes, so if there's another question... We'll leave it at that, then. Thanks, Joao. You're very kind.