I will talk about the mbbox project. It was a CPE initiative. My name is Michal Koneczny; I'm part of the CPE team, where I work as a software engineer. mbbox was one of the CPE team's initiatives for the second quarter.

Okay, so let's start. This presentation will have three parts. First, I will talk about what mbbox is, because I'm sure plenty of people are not familiar with it. Then I will talk about the team behind mbbox. And lastly, I will talk about the actual work we did.

So, let's start. mbbox was originally created by our former colleague Patrick; I won't try to pronounce his last name because I would fail. It was originally a Python script he wrote to have an easy way to deploy mbbox itself. He had been doing this manually before and decided to write a Python script. It was really just a script that called the OpenShift client and did the work for him; nothing in it actually checked whether everything was working.

The mbbox itself has a few parts. It actually has five parts, but there are two applications in it. The first one is Koji, which manages the module builds but doesn't do them itself; that is done by the MBS. Koji has three smaller components: there is Koji Builder, which is used in mbbox for creating new repositories; there is Koji Hub, which has the web interface where you can see what is being built; and the last one is Kojira, which works with the different buildroots.

Then we have MBS, which has two components in mbbox: the MBS frontend and the MBS backend. The MBS frontend is just an Apache application that is the web interface for the MBS backend, and the MBS backend does the actual work. They communicate by sending messages between them using fedmsg, and we tried to move to Fedora Messaging.

So that was the mbbox itself. The whole system is created to make deployment of the module building system easy in any infrastructure you want.

Now about the team that worked on the initiative. We were three: Adam Saleh, Leonardo Rossetti and me. You can see our IRC nicks; if you have any question about mbbox, just ask us on Freenode. We are usually in the #fedora-apps channel.

Okay, what was our goal for this initiative? Our goal was to make a deployment of mbbox that is easy to deploy in OpenShift. We wanted to replace fedmsg with Fedora Messaging, and we wanted to use an existing certificate authority and an existing messaging system, because the original script deployed its own certificate authority and its own messaging system; there were plenty of things that could instead be reused from the infrastructure it runs in.

So we decided to work on it, and we decided to create an operator. For those who don't know what an operator is, in our case it's an Ansible playbook that deploys the whole solution you want; you will see a small sketch of how that is wired up in a moment. We wanted to go with templates in the original request, but decided against it because templates are deprecated in OpenShift 4, and the aim was to be deployable in OpenShift 4. The operator works on both Kubernetes and OpenShift; there are some things that exist in OpenShift but not in Kubernetes, but you can use both. Our operator was actually tested on Kubernetes the whole time. We tested on OpenShift at the end of our development cycle, because we needed someone to provide us a free node, and Kubernetes was easy to test.
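Here is that sketch. With the Operator SDK, an Ansible-based operator is wired up through a watches file that maps a custom resource kind to an Ansible role. This is a minimal example only; the group, kind and role names below are made up, not the real mbbox ones:

    # watches.yaml: the SDK watches custom resources of the listed kind
    # and runs the given Ansible role whenever one is created or changed.
    # The group, kind and role here are hypothetical examples.
    - version: v1alpha1
      group: apps.example.org
      kind: KojiHub
      role: koji-hub

So "writing the operator in Ansible" mostly means writing ordinary roles; the SDK takes care of watching the cluster and invoking them.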
The operator is configurable by the admin. You have custom resource files that you can edit to add your own configuration; I will show a small example of one in a moment. We decided to go with a fully configurable operator, so you have the ability to switch to your own configuration for any component that is part of mbbox. And there are predefined deployments of pods, which is what I was talking about at the start: the operator deploys the whole thing for you. In our case we don't have, right now, something that will do all the work for you; you need to deploy the components one by one. But everything is designed to work together and tested to work together.

There are plenty of technologies we used when working on the operator, and some of them were new for me because I had never worked with an operator before. Minikube was one of them; I had some experience with OpenShift, but not with Kubernetes. Zuul was new for me too. It is what we decided to use for the CI system; we wanted to go with CentOS CI, but Zuul allowed us the extra freedom we needed.

We wrote the whole operator in Ansible. We decided to go with Ansible because the team was familiar with it, and it is the second most common operator language you can use. The first is Go, but we didn't have any experience with Go, so we went with Ansible. It's also widely used in the CPE team, so it will be easier for anyone to manage or maintain it if there is ever a need.

We used Molecule for testing. I had never used it before, and I didn't know there was such a nice tool for testing Ansible code. It provides the data for the deployment to the Ansible playbooks, then just runs them, and you can assert whether everything was done the way you want. The test for the whole deployment takes around 10 minutes, I think, so it's not something you will want to run every time, but it's good that you can actually test whether your Ansible code does what you need.

We used Quay.io for image hosting. The most problematic part here was Koji, because there weren't any existing containers for Koji, so we created them in this project. You can find the Koji Hub and Koji Builder images on Quay.io; they are publicly accessible if you want to use them for anything else.

We used Vagrant for the development environment. It is used in a few projects, and it's nice that it creates everything for you; you can just do vagrant up and then work on the project itself.

So let's go to the next slide. We started by creating the development environment; this actually started even before the project and the initiative itself. Vagrant runs Minikube and the SDK, which is the Operator SDK, a tool that scaffolds the roles for you, and you can use it to test your operator. There was also a second environment, a second box I think they are called in Vagrant, for the Python script itself, so there were two. We decided to remove the Python parts at the end, but if you look at the history of the mbbox repository, you can find them. So if you ever want to run an Operator SDK development environment in Vagrant, you can use our repository and our Ansible playbooks for deploying it into Vagrant. There is also manual deployment, which is described in our documentation.
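Here is the example I promised. Manual deployment basically means creating the custom resource files yourself and letting the operator react to them. This is only a sketch to give you the idea; the kind and field names are hypothetical, so check the documentation for the real spec:

    # A made-up custom resource for illustration only; the real mbbox
    # kinds and spec fields are described in the project documentation.
    apiVersion: apps.example.org/v1alpha1
    kind: KojiHub
    metadata:
      name: example-koji-hub
      namespace: mbbox
    spec:
      image: quay.io/example/koji-hub:latest
      replicas: 1

You would apply a file like this with kubectl or oc, and the operator's Ansible role picks it up and creates the deployment and related objects, filling in defaults for anything you didn't set.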
I will have the links at the end of this presentation, and I can put the link to the GitHub repository in the chat if anybody wants to look at it. I have it uploaded to my GitHub repo, but I'm not sure if it will be uploaded somewhere by Nest.

I see the question. This is one of the things we didn't do yet, because right now we handed the work over to the CentOS Stream team, which was the one that originally needed it. We are waiting for feedback, and there are a few things that need to be done before we upload it to OperatorHub. So right now it's deployable, and there is documentation on how to deploy it into OpenShift or into Kubernetes, but it isn't available on OperatorHub.

Okay, so the next part we worked on was Koji. Koji actually took most of the time we spent on mbbox, because we decided to go with separate roles for the Koji components, and Koji Hub and Koji Builder were the difficult ones. Koji Hub was never meant to run in a container, so we needed to solve a lot of the issues that came up. It was critical, because every other component in mbbox communicates with it. Koji Builder was the first one we tried to get done, and it was hard to get it communicating with Koji Hub because of the certificates and other things we weren't aware of. We got some help from the Koji team, which really helped us out. Once we got these two running, Kojira was an easy one. So Koji was the most problematic part; even the MBS didn't take us that much time.

Part three is the MBS. We have separate roles for both of its components; all of this is in the GitHub repo, so you can look at it. As I said, there are two parts: the MBS frontend and the MBS backend. We also decided to create a shared role for some common pieces. Most of the components use the Koji mount point, so we created a shared role that deploys it only once, not once per component; you will see a small sketch of such a task in a moment. The MBS also had plenty of shared configuration options, so we created a shared role for those too. It was very good to have this, because it cleaned up the code a little and helped us get things deployed faster.

And the last thing we did was to glue all of this together. We created a role for the shared attributes, as I said before. We updated the documentation, so it should be up to date, and everything in the documentation should work. If you want to try it and deploy it, there is a guide for that; if you want to contribute to it, there is a guide for that too. And there is also a description of every configurable option in mbbox, so you can look at any component you need and see what every configuration option does and what you need to set there. There are some default values, so if you just want to deploy it, you can use the defaults: it will deploy a certificate authority for you if you don't specify one, and it will also deploy its own message hub, a RabbitMQ cluster, to be able to send the messages, if you don't specify one.

The last thing we did was to test the deployment in the CentOS OpenShift cluster. It took us a few days to actually get it deployed, because there were some issues we hadn't hit before, but in the end we got it running. We left the actual testing to the CentOS Stream team, because they know how this should work; we did only the deployment. And it was really nice to have it there and see that everything works like it should.
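Here is that sketch of a shared-role task. It is a minimal example with assumed names, creating the shared Koji mount point volume once so that every component role can simply mount it:

    # A sketch only: the PVC name, namespace variable and storage size
    # are assumptions, not the real mbbox values.
    - name: Create the shared Koji mount point volume
      k8s:
        state: present
        definition:
          apiVersion: v1
          kind: PersistentVolumeClaim
          metadata:
            name: koji-mnt
            namespace: "{{ meta.namespace }}"
          spec:
            accessModes:
              - ReadWriteMany
            resources:
              requests:
                storage: 10Gi

Because the task is idempotent, it doesn't matter which component is reconciled first; the volume is created exactly once and then reused.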
Here are the things we want to work on next. There is another initiative in the CPE backlog called mbbox phase 2 which will take care of this; I'm not sure if the same team will be on it, but we will see. So the first thing is to make it available in a public operator repository. We want to be sure that it's working like it should, so we are waiting for the feedback. We want automatic image builds on releases of the components: if there is a new version of Koji or a new version of MBS, we would be glad to have the image automatically built and uploaded to Quay.io, so we could just download it and use it. Automatic updates of a running operator are also possible; we don't have that right now, because without the automatic image building it doesn't make sense. Once the image gets there, it should automatically deploy, but this is a future thing, I think, not working right now. We need to do some OpenShift optimization, which will be based on the feedback from the CentOS Stream team. Then there is Fedora Messaging support for MBS: MBS is still using fedmsg, and there are plans for Fedora Messaging, but we didn't have time to look at it yet. And the last thing is the master component. This should be a component for deploying the whole thing in one step; right now you need to do the deployment component by component, but this should help you deploy everything at once.

I think this is my last slide. I will look at the questions. I see from Neil that there is a question about why there is no dist-git in the solution. This was requested by the CentOS Stream team, and they didn't need dist-git; they just wanted module building, so that's why. But in the future there is the option that we create operators for other applications we use in the infrastructure. It would make them much easier to deploy in our own infrastructure, but this is just an idea; I'm not sure if we will go with it and where we would find the time to do it. There are no other operators for the other services right now. Like I said, it would be nice to have them. This was the first operator we created in the CPE team, so it would be nice to have an operator for, let's say, dist-git, or, I'm not sure about the others, datagrepper, datanommer, some other services. A Pagure operator would be cool. I'm not sure if this will be part of the CPE team's work, because we want to... oh, not to get rid of Pagure, but we don't want to make any... so I'm not sure if we will do any other work for it. But it would be a nice idea, Neil.

At first glance the operator looks pretty tricky. When we started looking at it, I was really glad we had Leonardo on the team, because he actually had some experience with Kubernetes operators. At the start I was just lost; I didn't know where to start or what to do, so I tried to learn from the work that was done in other operators. And about the Ansible playbooks for Pagure: as Leo said, they could be used for an operator, but only to some degree. I'm not sure we used any Ansible playbook for the mbbox operator; I think we only looked at them but didn't take anything from them. But yeah, because it's Ansible-based, you can reuse at least some parts of it: some configuration, some deployment things. You can use the deployment config, you can use the secrets definitions. The ImageStream is only usable in OpenShift, so if you want it to also work in Kubernetes, an ImageStream can't be used.
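To illustrate that portability point: a plain Deployment like the sketch below works on both Kubernetes and OpenShift, while an ImageStream is an OpenShift-only object, which is why the portable kinds are the safer choice. The names and image here are made up:

    # Works on both Kubernetes and OpenShift; an ImageStream would be
    # OpenShift-only. All names below are hypothetical examples.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-mbs-frontend
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: example-mbs-frontend
      template:
        metadata:
          labels:
            app: example-mbs-frontend
        spec:
          containers:
            - name: frontend
              image: quay.io/example/mbs-frontend:latest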
You can also use the networking configuration, and the configuration of the app itself; the deployment config, I think. Ah, deployment config, yeah. Yeah, not sure. But I think OperatorHub is actually managed by the Kubernetes team, so I think you need to make it work in Kubernetes. A Kubernetes operator actually works in OpenShift out of the box, so if it's working in Kubernetes, it should work in OpenShift. Okay, the time is gone for us, so thank you everyone. I will just post here one other link, for the blog post I wrote about mbbox itself. It should have the same information I just shared with you, so you can look at it. And thank you for attending this talk.