Good afternoon everybody. In half an hour it's time for lunch, so this is the last talk before lunch. I'm going to present here about delivering a bleeding-edge community OpenStack distribution that is known as RDO. From the last two talks you came to know what OpenStack is: a collection of open source services through which you can form public and private clouds. About me, she has already introduced me.

Moving ahead, let's talk about RDO. What is RDO? There is no actual full form; you can call it anything. Some call it the RPM-based Distribution of OpenStack. Some say it's a group of people who Rapidly Deploy OpenStack. And if your name starts with R, it's youR Distribution of OpenStack. Currently it has more than 350 packages maintained, and still growing. When I say packages, I mean OpenStack packages.

Since RDO is an RPM-based distribution of OpenStack, let's understand what an RPM package is. On a Red Hat based distribution, Fedora, CentOS or RHEL, when you run rpm -q python, you get something like name-version-release. Name is the name of the project, version refers to the version of the project, and release is related to the spec file. A .rpm file is built from an SRPM, a source RPM, which contains the source plus the spec file. The source is the actual upstream source code of the project. Within a package there is a list of included files which get installed at particular places when you install the package. And what is actually within a spec file? A spec file is a configuration file format which contains the steps for how to build a package, the installation and removal procedures, the dependencies of the package, and any patches you want to carry with the package. The people who do packaging are known as packagers; we also call them spec file developers.
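To make the name-version-release split concrete, here is a minimal sketch that pulls apart a made-up package string of the kind rpm -q prints (on a real system you would feed it actual rpm -q output, and rpm -ql would list the installed files):

```shell
# Split a hypothetical N-V-R.arch string as printed by `rpm -q`
nvr="python3-3.6.8-2.el8.x86_64"   # made-up example output
arch="${nvr##*.}"                  # after the last dot    -> x86_64
rest="${nvr%.$arch}"               # python3-3.6.8-2.el8
release="${rest##*-}"              # after the last dash   -> 2.el8
rest="${rest%-$release}"           # python3-3.6.8
version="${rest##*-}"              # 3.6.8
name="${rest%-$version}"           # python3
echo "name=$name version=$version release=$release arch=$arch"
```

The same fields show up in the SRPM filename too, just with .src.rpm instead of an architecture.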
So before going to the job of spec file developers, what does a packager need? First, a distgit server. A distgit server is nothing different from a source code repository server created with Gitolite, GitLab, GitHub, or your own deployed Git server. On the distgit server we generally keep all the spec files; that's why we call it a distgit server. For packaging we need peer review: multiple people should review the spec file, people who have a better understanding of the project itself. There should be automated testing and validation of every spec file in an isolated environment, one that does not have an internet connection, so it cannot download anything from the outside world. Otherwise the build will silently start downloading things, and if you then try to install the package somewhere, it will break. Smart package management: make sure you can review and sync your packages across multiple branches and that they keep working. Packagers are also responsible for writing systemd unit files, so that when you package a service, after installation it can be started, stopped, enabled, and disabled. You have to make sure in the spec file that after installation your configuration files and executables go to particular places. You have to follow upstream changes too: when a new version of a project comes up, make sure the package gets updated, automatically or manually. And if a vulnerability comes up, you have to patch your package and ship it as soon as possible.

Next comes packaging OpenStack. You have seen the first talk: the speaker said he tried installing OpenStack from source, but it was hard and had some problems, like managing configuration files during installation and dropping the files at particular places. He had to write the systemd unit files himself. Packaging takes care of all this stuff.
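Those duties all live in the spec file. A heavily trimmed, hypothetical sketch (the package name, paths, and service name are illustrative, not from any real RDO package; the systemd scriptlet macros follow the usual Fedora packaging conventions):

```spec
# Hypothetical spec file skeleton for an imaginary service package
Name:           openstack-exampled
Version:        1.0.0
Release:        1%{?dist}
Summary:        Example OpenStack service (illustration only)
License:        ASL 2.0
Source0:        %{name}-%{version}.tar.gz
BuildRequires:  systemd
%{?systemd_requires}

%description
Shows where build steps, file placement and systemd scriptlets live.

%prep
%autosetup

%install
# put the unit file and config where they belong
install -D -m 0644 exampled.service %{buildroot}%{_unitdir}/exampled.service
install -D -m 0640 exampled.conf    %{buildroot}%{_sysconfdir}/exampled/exampled.conf

%post
%systemd_post exampled.service

%preun
%systemd_preun exampled.service

%postun
%systemd_postun_with_restart exampled.service

%files
%{_unitdir}/exampled.service
%config(noreplace) %{_sysconfdir}/exampled/exampled.conf
```

The %post/%preun/%postun scriptlets are what make the service behave correctly across install, upgrade and removal.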
On 28th February we released a new version of OpenStack, known as Queens. This is the current state of the project: it has more than 700 projects, and if you count the git repositories, more than 1800. Each release we have more than 1600 contributors. Each day around 250 commits merge across the different git repositories, and we release every six months. It's pretty fast.

So there are some extra constraints on the RDO packagers. They are packagers, but they are special packagers. First, they need to validate each spec file on every new commit merged in a project. Second, strong dependency management between packages: suppose you have two packages; there might be a dependency where a particular version is used in project A while a higher version is required in project B. You have to take care of managing that too. Test systems must properly test and validate the package: we not only install the package, we also need to make sure that the executables and everything we ship with that version of the project work fine. And package updates should work smoothly: you have created a package, and when a new release of the distribution comes up and you update the package from a lower version to a higher one, things can break; we need to take care of that as well. And you have to make sure RDO packages work fine on three platforms: Fedora, RHEL and CentOS. It's a pretty hard job.

These are the current stats of RDO releases; I have taken the last four releases. We try to release RDO on the same day OpenStack gets released. For Newton, we released in one day. For Ocata, we released in 12 hours. For Pike, we released in two days, because of some infrastructure issues and production push failures.
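The kind of version conflict described above can be spotted with a version-aware comparison; a minimal sketch with made-up version numbers (real tooling would query the repos, e.g. with repoquery, rather than hard-code them):

```shell
# Illustrative check: does the shipped version satisfy another project's pin?
available="2.4.1"       # version the repo currently ships (made up)
required_min="2.5.0"    # minimum version another project needs (made up)
# sort -V compares version strings component-wise; the lower one sorts first
lowest=$(printf '%s\n%s\n' "$available" "$required_min" | sort -V | head -n1)
if [ "$lowest" = "$available" ] && [ "$available" != "$required_min" ]; then
  echo "conflict: repo ships $available but $required_min is required"
fi
```

With both projects sharing one distribution repo, a single version has to satisfy everyone, which is why this management has to be centralized.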
But with Queens we made it back on track, in seven hours. How did we make it possible? We are going to talk about that. The first thing we did was build RPM Factory, from Software Factory plus some RDO-written tools and community tools. We started chasing the trunk: we try to test each commit getting merged in each of the OpenStack projects. We have different levels of CIs consuming RDO packages: the Puppet CI, TripleO CI and Kolla CI, as well as the Packstack one, where we test all the package changes. And we have more scenario jobs in our pipeline, so that we promote the packages from one yum repo to another; I will explain that in later slides. We try to catch issues from the moment a commit is merged and we start building the packages, and we always try to improve our promotion workflow.

When we talk about community tools, these are the ones we use. First, Koji. Koji is a build system that builds and stores RPM packages in an isolated environment. Our Koji runs as the CentOS Build System, CBS, at cbs.centos.org, and the RDO builds there are maintained by the Cloud SIG. In the CentOS project ecosystem there are different SIGs maintaining different kinds of projects. We also have a CI server provided by CentOS, ci.centos.org, which is based on Jenkins. And we run all our infrastructure on RDO Cloud, which runs the RDO Ocata release; it's a production-grade OpenStack deployment in-house within Red Hat. Each of the OpenStack developers gets some quota on RDO Cloud, so they develop OpenStack on OpenStack itself.

Let's come to the RDO tools. The first tool is Delorean. Delorean is a continuous package delivery platform.
Once your commit merges, Delorean will pull that commit, and it has information coming from rdoinfo, which contains all the metadata about the package. So once a commit gets merged, Delorean receives it. It also fetches the spec file from the distgit server based on the rdoinfo data, and it starts building the package from that commit. If everything works fine, the build is promoted to the current repo. Against current we run some CIs, and after that we promote it to consistent. The consistent repo is passed to the Packstack CI, and if everything passes, we make a new yum repo known as current-passed-ci, which is passed on to the upstream CI. Now suppose we are building a package, a new commit got merged, Delorean was trying to build it, and something broke: then that commit is not promoted to current or consistent. We have to fix that commit, and then we proceed. Once a Delorean package build fails, we have automated scripts which create a dummy review with the logs and the failing commit, saying this review was generated because of this, and these are the things causing the build failure. The maintainer gets notified, fixes it, and we come back on track.

We have a few more tools: rdopkg, rdoinfo, config, and graffiti. Packagers have to do a lot of manual steps. Suppose a new release comes up and the dependencies got updated; currently lots of people update the dependencies and their versions manually. You can do it automatically with rdopkg; lots of the manual tasks for a packager have been automated by rdopkg. Another tool is graffiti. We also build packages for the stable branches of all the OpenStack projects.
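Consuming one of these trunk repos usually just means dropping a .repo file into /etc/yum.repos.d; a sketch (written to the current directory here, and the baseurl only illustrates the trunk.rdoproject.org layout, which varies by release and platform):

```shell
# Sketch: write a yum .repo file pointing at a DLRN/Delorean trunk repo.
# Real installs place this in /etc/yum.repos.d/ as root.
repo_file="delorean-current.repo"
cat > "$repo_file" <<'EOF'
[delorean-current]
name=RDO trunk (current)
baseurl=https://trunk.rdoproject.org/centos7/current/
enabled=1
gpgcheck=0
EOF
grep -c '^baseurl=' "$repo_file"   # sanity check: exactly one baseurl line
```

Swapping current for consistent or current-passed-ci in the path selects a repo further along the promotion pipeline, i.e. one that has passed more CI.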
There, once a new version comes up, it gets updated automatically; or if you are proposing a new change to the stable branches, you just need to take care of the spec file, and it gets reviewed there. Once it gets merged, graffiti will communicate with CBS and automatically do a tag build. And if you submit a review on the stable branch, it will do a scratch build to make sure it works fine. Another tool is config. We treat all the permissions, job configurations, and everything as code, and we keep it versioned. Suppose I have to update a configuration or update permissions: we do it through Git. We have a config repository where we manage all the configuration files, even the creation of a repository, through Git.

We have another tool known as WeIRDO. It's an Ansible-based framework. It takes the gate jobs which run in the OpenStack upstream infrastructure and tries to replicate the same thing within the RDO infrastructure through Ansible playbooks. The main goal of this tool is to improve test coverage. For a developer, if everything works in devstack, it's fine; but for an end user, you need to make sure it works on other distributions too. WeIRDO solves that problem. What it does: it provisions the test environment and sets up the trunk repositories (when I say trunk, I mean the latest repositories), then installs the project and its dependencies. Suppose it's a Puppet project: it will go through the Gemfile, install the specific Ruby packages and everything, then start deploying and configuring OpenStack. Once everything is configured, it runs the Tempest tests, uploads the logs and results to a particular server, and deletes the node. In this process, if anything fails, you have to look at it and fix it upstream, and then the whole process runs again.

Another tool is ARA, Ansible Run Analysis.
Most projects are moving towards Ansible. How many of you know about Ansible? Okay. Ansible is a configuration management tool, and most projects are using it to configure and deploy things. But if you want to analyze your Ansible playbook runs, you can use ARA. You can install it on your laptop, try it, and visualize the runs. In all the OpenStack projects we use Ansible a lot while running the jobs, so we have to analyze what the job configurations look like, how the playbooks run, and what happens. It appears something like this; this one is from one of my jobs upstream. You can see the files related to the playbooks, the tasks associated with them, what failed with which status, everything. Coming back to the talk: it's a simple Python project, and if you want, you can contribute to it.

The next tool is Software Factory. Software Factory is a CI/CD platform based on the OpenStack upstream CI. The upstream CI has written lots of tools to run all the jobs, test each change on devstack, and provide resources to TripleO and the other installers to test their changes, so that they can reliably release a robust OpenStack. OpenStack has written Zuul and Nodepool; they use Gerrit, and they also contribute to Gerrit; and there are lots of other projects. In Software Factory we have packaged all of this so that a user can easily deploy a complete CI/CD platform in-house. One piece is Gerrit. Another is the job orchestration and gating system known as Zuul; they wrote it as a replacement for Jenkins. Version three, Zuul v3, is quite cool and awesome. In Zuul v3 you just define a .zuul.yaml file within your project, and there you write your job definitions.
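A hypothetical .zuul.yaml along those lines (the job name, playbook path, and node label are made up; parent: base and the check/gate pipeline names follow common Zuul v3 conventions):

```yaml
# Hypothetical in-repo Zuul v3 configuration
- job:
    name: example-lint
    parent: base
    description: Run lint via a playbook shipped in this repo
    run: playbooks/lint.yaml        # Ansible playbook in the project itself
    nodeset:
      nodes:
        - name: test-node
          label: centos-7           # Nodepool label to request

- project:
    check:
      jobs:
        - example-lint
    gate:
      jobs:
        - example-lint
```

The job body is just an Ansible playbook, which is why ARA fits so naturally for analyzing what the jobs did.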
You write your own scripts or Ansible playbooks, and all the job configurations are run on nodes provided by Nodepool. What Nodepool does: it manages the worker nodes, the servers, for you. Based on the configuration defined in .zuul.yaml, Zuul requests nodes from Nodepool and runs the jobs on those systems. Zuul also has a smart gating system. Suppose you have submitted a change and it depends on another change: Zuul will make sure the dependent change merges first, and then your change merges; you can't merge both at the same time in the wrong order. This is handled by Zuul very nicely. Here too we treat all the configuration of Software Factory as code. It has a flexible workflow for reviewing things, with Gerrit as the dashboard. It also provides Etherpad, a paste service, repoXplorer, and much more. And you can visualize all the events of the Software Factory through its dashboards.

This is what Software Factory currently looks like; this is a deployed instance. It's free software: you can download it and install it on your own systems.

Moving ahead: we combined all these tools and made RPM Factory, which is based on Software Factory. RPM Factory currently looks like this. You can find the spec changes here; for example, Delorean tried to build something but it failed, so it created an empty dummy review. Here you can see that this is because this commit failed, and here are the logs of the build failure. The maintainers associated with that project get notified, and someone will come and fix it. This whole thing at review.rdoproject.org, known as RPM Factory, is Software Factory branded for RDO. We host all the distgit as well as the patches branches there. All the upstream changes are watched by Delorean, so each and every commit gets a package build, and these are properly tested.
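The cross-change gating mentioned a moment ago is expressed in the commit message footer; a hypothetical example (the change URL and number are made up), which Zuul reads to enforce merge order:

```
Fix flavor validation in the API

The server side needs the new client method, so that change
has to land before this one.

Depends-On: https://review.openstack.org/#/c/123456/
```

When Zuul sees this footer, it tests the two changes together and refuses to merge this one until its dependency has merged.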
All the distgit changes are reviewed through Gerrit. Each change has to get a +2 and a Workflow +1; then it gets merged, just like upstream. New builds are now done automatically using graffiti, and we have lots of bots to update the versions of packages. We always try to promote the dependencies while making sure they work properly with the installers. Suppose a new version of a dependency comes up: we build it in CBS, tag it, try to promote it through a particular repo, and make sure everything works fine with the updated version. If it does not work, we downgrade it through rdoinfo.

This is the current workflow. Once an upstream commit gets merged, Delorean picks it up and creates a yum repo out of it, which is served on trunk.rdoproject.org. For the stable branches it creates a scratch build and then a tag build, promoted through the CentOS Build System. All the packages built on the CentOS Build System get tagged and are mirrored through the different CentOS mirror servers; then they get promoted through current, consistent, and current-passed-ci, and fed again to the different CIs, so that we make sure each and every yum repo gets tested. We serve builds for all the commits on trunk.rdoproject.org; here, just a minute. Here, if you check the master branch, you can see which commit each package was built from, for example a commit merged recently for that project. Once this gets promoted, it is consumed by the installers so that they can test it easily. So by the time we release Queens, the installers have made themselves ready for the release and can properly release their software, and we try to test every stack in different fashions, deploying different scenario jobs.
Sometimes we deploy OpenStack on a single node, sometimes on multiple nodes with multiple configurations. We also test the gate jobs outside the gate using WeIRDO. All of this leads to delivering a more robust cloud through RDO.

These are the current stats for Ocata: we made 919 commits with 86 contributors, we released within 12 hours, and we caught 230 build failures. These are the RDO stats from the Pike release; it got delayed because some folks had taken PTOs. So for the Queens release we decided to plan PTOs clearly, and we appointed a few release wranglers. We enabled more automation and improved our pipeline. These are the stats from Queens: we have 22 new contributors, we are back on track, and we have implemented a few things, like automatically updating versions. We are currently testing each and every change on two platforms, Fedora and CentOS, and we test RDO clouds on each milestone release. People were complaining that they have no hardware, so we introduced a test cloud for that: in an etherpad you give your email ID, and from there you get the SSH key and can start deploying. There are a few upcoming challenges we have to get through in the coming releases: one is moving to Zuul v3, another is Python 3 support. And these are the companies using RDO, from the community itself.

So how do you become a part of the RDO community? Simple: contribute to RDO. Contributing to RDO also leads to contributing to OpenStack, because when Delorean catches a build failure, it sometimes happens that the upstream code has done something messy, and you have to fix it there. There are some easy fixes too, so if you are new to OpenStack or RDO, you can check those out. We have weekly meetings on each business day of the week; you can join, attend, talk with people, take up one of the challenges, and help improve things.
We also need documentation writers who can help improve the RDO docs. You can also improve the installers by taking baby steps. And you can join the RDO mailing list. That is it from my side. Any questions?

Question: you have review.rdoproject.org and review.openstack.org. Do the two run in parallel, or independently? Answer: review.openstack.org runs on OpenStack-provided clouds; currently nine or ten different companies have donated clouds, and those resources are used to run review.openstack.org. review.rdoproject.org runs on RDO Cloud, which is itself based on RDO; we are soon upgrading it to Queens. But we are using the same software there and here. In Software Factory we have packaged all of this as a single thing, so if you want to deploy a CI/CD platform just like the OpenStack infrastructure has, you can use Software Factory. It will configure the gating for you, create a Git server for you; it has basically everything. You can even integrate a task tracker, such as Storyboard or Taiga. They also have a cool thing known as repoXplorer, through which you can visualize the contributions of all the people, how many commits were made, shiny heat maps and everything. Recently they also introduced code search, through which you can search across all the git repositories associated with that CI/CD platform.

Question: imagine I have a commit merged upstream; how does that commit move to RDO? Answer: once a commit gets merged, Delorean will pick it up. Delorean works from rdoinfo, which has all the information: for a particular project, what is the source code repository and what is the spec file repository. Delorean works on hooks.
So once a new commit is merged, it triggers a hook which Delorean listens to. Delorean fetches that commit and fetches the spec file, then creates a tarball and generates the RPM packages out of it. It uses mock, which chroots into a directory; in that chroot it installs all the packages required for building, then builds the package, installs it, and tries to upgrade it. If everything works fine, the build is successful and the review gets a +1. If it fails, it creates an FTBFS review, which shows up in Gerrit just like the ones you saw for RDO. And for the OpenStack projects we find critical, we have introduced RDO third-party jobs. Using those, you can build a package early: if you are not sure whether your change is going to break RDO or other things, you can enable the third-party jobs or the experimental jobs, or with Zuul v3 you can define and add your own jobs very easily by making changes in the Zuul v3 configuration. Any more questions? Thank you.