Okay, let's start, so we will not be late for lunch. This talk is about using OpenStack's third-party CI system — not for a company to copy CI into its own environment, but for another open source community. I'm Uli, and we will present this together. I will first explain a little bit about the relationship between OpenStack and OPNFV, and then what we have already done with third-party CI between OpenStack and OPNFV. Fatih will explain the details of the setup, then we have some experience sharing from Yolanda, and I will summarize at the end.

OPNFV is a midstream project. We use code from several upstream open source projects, as you see in the middle bubble. OpenStack is the biggest one, but there are also others. We take that code and add specific telco requirements and use cases to it, so we make changes and additions. This means we also need to create our own CI to produce our software, so we have dependencies between all of those.

How do we do that? We have requirement projects that work on certain topics in OPNFV, and a big part of these requirement projects focus on OpenStack. I marked that here in the lower left part, with the red borders on the requirement projects on the left side. As you know, in the reference architecture OpenStack provides this lower part of the MANO column, and there is more around it — that is what the other projects are about — but we start here from the OpenStack side.

What is OPNFV doing with OpenStack in that connection? Our requirement projects typically have a certain workflow: we identify gaps or possible improvements for NFV usage in an upstream project — in our case OpenStack. Then we document these requirements, first for ourselves and then together with the upstream project we are working with. In the OpenStack case this means creating user stories or blueprints and discussing them with the community.
The most important step is number six: implement those patches. What we do today is test those patches only in the upstream environment, with the other OpenStack components, and when a version is stable we downstream it to OPNFV. Only then can we test the patches in the OPNFV environment, before they are released in OPNFV. This leads to very long feedback cycles, because between the implementation of the patches and being able to use them in the OPNFV environment, we have to wait for a stable version of OpenStack to downstream. That takes too much time, so we need some improvement here.

I have put this workflow into a diagram. I will not go through it, to save some time for the details of our third-party CI setup, but you can see how this feedback loop comes much too late in the development cycle. That means we sometimes only get feedback after five months, and this is obviously too long. So what to do about it? Fatih can tell us how to solve that.

So basically, as we mentioned, the feedback time is too long. A developer contributes something to OpenStack, and by the time we get it into OPNFV, the developer has forgotten about the code he or she contributed. So we want to make sure that we enable fast feedback cycles between different open source communities. OpenStack is obviously the biggest community we are working with, so we decided to start with them. They also have this third-party CI mechanism that allows other communities or companies to hook into their streams and get the patches. In order to do that, we obviously need to have some infrastructure on the OPNFV side. And this was the question we have been asked a couple of times by the OpenStack Infra people: what are you using for CI?
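The "hooking into their streams" mentioned above is concrete: Gerrit exposes an ssh event stream (`gerrit stream-events`, usually on port 29418) that third-party CI systems listen on. The sketch below is only illustrative — the sample event is fabricated, and in a real setup the JSON lines would arrive from the ssh stream rather than a here-document:

```shell
#!/bin/sh
# Fabricated example of a "patchset-created" event, in the one-JSON-object-
# per-line shape that Gerrit's ssh event stream emits.
cat > event.json <<'EOF'
{"type":"patchset-created","change":{"project":"openstack/bifrost","branch":"master","number":"12345"}}
EOF

# A third-party CI listener filters for the projects it watches and
# triggers a verification job for each matching event.
WATCHED='openstack/bifrost'
if grep -q "\"project\":\"${WATCHED}\"" event.json; then
    echo "trigger verify job for ${WATCHED}"
fi
```

In practice this filtering and triggering is typically handled by the Gerrit Trigger plugin inside Jenkins; the shell form is just to show the shape of the data flowing between the two communities.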
What does your environment look like? These are the basic tools we are using, and the only thing I want to highlight here is that we don't use Zuul. When you talk to OpenStack Infra people they assume we are using Zuul, but we don't have Zuul, and that is a bit different from the OpenStack Infra tool chain. So basically we have Gerrit, like OpenStack, and we have Jenkins — no Zuul — and we use some OpenStack utilities like Jenkins Job Builder to control, modify and configure our jobs. We have our artifact repository, we have our Docker images stored in Docker Hub, we have bug tracking and documentation tooling, and we also collect our test results in a database and show some statistics and visualization of those. Basically our infrastructure utilizes all these tools to provide CI/CD support to the community.

The second, and perhaps more important, part of the OPNFV infrastructure is the hardware infrastructure — in this case the Pharos labs. We have the Pharos project in OPNFV, which deals with the federated lab infrastructure of OPNFV. As you can see on the map — this is an outdated map — it shows eight countries, which means eight different labs all around the world. All those labs are connected to Jenkins, and we are able to run CI/CD in different labs in different parts of the world. On top of the resources we are using for CI/CD, we also have additional resources for development purposes, so developers can code their features or write their test cases and so on. The important thing here is that our hardware infrastructure is pretty good; we generally don't have issues with lack of hardware at the moment, which will help us to get this third-party CI up and running.

Using these tools and this hardware, we then set up our third-party CI. When we say third-party CI for OpenStack in the OPNFV context, the first part is the project/patch-focused CI — you might call it the traditional third-party
CI. Because when you look at the other third-party CI systems set up against OpenStack, they are generally interested in certain projects — like Neutron: OK, I am interested in Neutron; when a patch comes in, I run something in my company's laboratory and give feedback to Neutron or Nova. We will do the same thing: we will hook into different Gerrit projects in OpenStack and run patch-set verification or post-merge jobs, and give feedback to developers when things happen — not months later. But this part will have limited testing, because it is not the most important thing in our third-party CI setup. The important thing in our third-party CI setup is the end-to-end, platform-focused CI, which is exactly where our problem lies.

As we mentioned, we have many different upstream projects: OpenStack, OpenDaylight, KVM and so on. In the OpenStack case it takes months to bring OpenStack into the OPNFV environment, and we generally work with stable versions, which means months of delay. What we want to do is build this platform in OPNFV on a daily basis. Whenever something happens upstream, or periodically on a timer, we want to install OpenStack from master, in virtual environments or on bare metal, get it up and running, and then put our OPNFV-specific components or other upstream components on top — SDN controllers, Open vSwitch, or other features. As I mentioned, we want to deploy from master, cut the feedback time short, and do more extensive testing using our functional testing framework and platform benchmarking frameworks, and give feedback from a production-like environment to both OPNFV developers and OpenStack developers.

This is the basic project/patch-focused third-party CI: as a developer, I contribute something to an OpenStack repo, and that contribution triggers something in OPNFV CI — in this case OPNFV Jenkins. We build something, if something needs to be built, we do a virtual deployment, and we run smoke testing, for
example, and then we give feedback to that specific project, so the developer can see: oh, I broke something in OPNFV, I should fix this. A new patch comes, and then the developer gets: yes, this is good, it can go ahead. So on top of the regular OpenStack feedback cycle, we give similar feedback from the OPNFV side.

The extra thing we will do in this project/patch-focused third-party CI is that we also want to build things from OpenStack, store them in our artifact repository, and consume them internally in our OPNFV projects. What this means is that whenever something gets submitted to the projects we are looking after, we start running build jobs, virtual deployment jobs, test jobs and so on, to see if that change is good enough to pull into OPNFV. If it is good enough, we promote it to our artifact repository — a tarball, a Debian package, an RPM — and store it there. Then, if other projects in the rest of the OPNFV CI are interested in those artifacts, we pull them down from the OPNFV artifact repository and do bare metal deployment and testing this time. So essentially we cut more time here as well. An example of this could be Fuel plugins: again, it takes a couple of months to get Fuel plugins into a new OPNFV release; with this we want to do it in a day. Whenever someone sends a new change to one of the Fuel plugins, we will be able to pull it back down into OPNFV Fuel.

This last part is the end-to-end, platform-focused CI, and this is where things are a bit tricky. Here we basically want all these OPNFV and OpenStack developers to continue their day-to-day work as it is now, but we want to do things in the OPNFV environment and give feedback to them. It is basically: provision some bare metal resources, install OpenStack on them with the latest contributions — those that happened during yesterday, for example — and tell people what happened. And we decided to do things the same way as OpenStack Infra: in this case that means Bifrost and Puppet
InfraCloud. So if OpenStack developers want to contribute in OPNFV, and vice versa, they shouldn't see any difference in this kind of tooling when they move between the communities. This will make it easier for developers to contribute to both communities as well. And this is where Yolanda will tell us what she has been doing the last two or three months. She put in a lot of effort and she made things happen.

So I'm going to talk a bit about the Puppet InfraCloud project. Puppet InfraCloud is a lightweight OpenStack installer based on Puppet modules, and the main intention is to deploy an OpenStack for testing purposes — it is not meant for production. It is composed of several projects. We are using Bifrost for orchestrating all the bare metal servers. We also have the Puppet modules, puppet-infracloud — that is the actual installer we use in OpenStack Infra, and that we are using now in OPNFV. We use diskimage-builder as well, to build all the deployment images. And we also rely on Glean, which is a replacement for cloud-init, to apply all the settings on the server.

As for the basic requirements to deploy an InfraCloud: we just need a VM to host the Bifrost controller, and then just two bare metal servers at the moment — one for the controller and one for the compute. In terms of requirements, the VM needs access to the IPMI network, to take power control of the servers, and access to the PXE boot network as well, to be able to manage them. And the VM needs only about 8 gigabytes, so it is not much of a requirement. And how does it work?
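The minimal lab just described — two bare metal servers reachable over IPMI, with known MAC addresses for PXE boot — is the kind of thing Bifrost captures in a JSON inventory. This is only a sketch: the hostnames, addresses, credentials and UUIDs below are made up, and the exact schema should be checked against the Bifrost documentation:

```shell
#!/bin/sh
# Hypothetical Bifrost-style inventory for the two-node lab:
# one controller and one compute, each with IPMI power credentials
# and the MAC address used for PXE booting.
cat > baremetal.json <<'EOF'
{
  "controller00": {
    "uuid": "11111111-1111-1111-1111-111111111111",
    "driver": "agent_ipmitool",
    "driver_info": { "power": {
      "ipmi_address": "192.0.2.10",
      "ipmi_username": "admin",
      "ipmi_password": "secret"
    }},
    "nics": [ { "mac": "aa:bb:cc:dd:ee:01" } ],
    "ipv4_address": "203.0.113.10"
  },
  "compute00": {
    "uuid": "22222222-2222-2222-2222-222222222222",
    "driver": "agent_ipmitool",
    "driver_info": { "power": {
      "ipmi_address": "192.0.2.11",
      "ipmi_username": "admin",
      "ipmi_password": "secret"
    }},
    "nics": [ { "mac": "aa:bb:cc:dd:ee:02" } ],
    "ipv4_address": "203.0.113.11"
  }
}
EOF
echo "wrote baremetal.json with $(grep -c ipmi_address baremetal.json) IPMI-managed nodes"
```

The enroll step described in the walkthrough reads exactly this kind of inventory and registers each node with Ironic before the deploy step takes over.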
I mean, the first step is to deploy the Bifrost controller. Bifrost is a set of playbooks that deploy an Ironic server — a standalone Ironic — so we can manage all the bare metal servers. So we start with this VM and install Bifrost there. We need to configure all the settings for Bifrost, because everything is different depending on your lab: you configure the network settings, VLANs, IP ranges. You also have the freedom to choose whether to deploy Ubuntu or CentOS, depending on your needs. So you configure all the settings before applying the Puppet manifest for Bifrost. You also need to define the inventory of your servers: for each bare metal server you typically define the IP of the server, the user and the password, and the MAC address that you are going to use for PXE boot. With that, you have a Bifrost — an Ironic — controller ready.

After that you need to do two steps. First is what we call enroll: the enroll process takes all the data from the inventory, passes it to Bifrost, and adds all the nodes into the database so that they can be managed. Then you have the deploy process, which actually does the setup of the servers. What this deploy process basically does is take control of the power of the servers using IPMI, restart the servers, and then start the PXE boot process. The PXE boot process copies the deployment image that you want to install onto the servers, and copies a config drive as well, with all the network settings that you need, the hostname, SSH keys. That is copied onto the servers; when it is done, they are rebooted again, and the next time the servers simply boot from hard drive and you have them fully deployed.

What does it mean to have the servers fully ready? It means we have the operating system we want, with the configuration we want; all the networking is configured with your settings; SSH keys are ready, so you
can connect to the servers; and the hostname is set as well. So you don't have to configure anything manually — it is ready for you to apply all the manifests. When the deploy process is finished, you can SSH to the controller and to the compute, apply the Puppet manifests again to run the components you need there, and you have a fully working cloud at that moment. It is intended just for functional testing. The main advantage is that it is quite fast — you can have it ready in 20 minutes — which is pretty good compared to other installers that are focused on production.

So that's more or less what we are achieving. We want to have this ready for functional testing, and to repeat it again and again — pulling from master periodically and doing this functional testing all the time — to be sure that everything is working in our environment and is ready to be consumed by developers. At the moment we are at the step where we have this functional testing up and running. We are adding third-party CI to several projects — for example, this is done for Bifrost, and we want to do it for Puppet InfraCloud as well. This functional testing is also available on OPNFV Jenkins right now.

The main challenge I found when trying to reuse Puppet InfraCloud is that it was totally focused on Infra, and we are trying to use it in OPNFV now. That was the main pain: everything was built for Infra. It was only available for Ubuntu — the manifests didn't even run on CentOS or SUSE — so we had to modify them and make some patches to get them working. Now there is proper support for these three platforms. Also, the network settings were totally hardcoded in the manifests; if you moved this to another lab, it wouldn't work. So what we did is make it configurable: everything relies on Puppet Hiera now, so when you want to deploy in another environment, you just change the Hiera settings, apply again, and everything works. Also, I found several race conditions of the kind that is typical of
Puppet: you hit races where the first time you apply Puppet it works but the second time it doesn't, or it doesn't work until a later run. That is a blocker for an automated test, so I had to correct several bugs of this type to make it work in one pass. We also needed other features, because we moved to another lab and our lab settings were very different in terms of NICs and VLANs, and there were features not present in the building blocks — for example in Glean, the cloud-init replacement, and in Bifrost. So we implemented those, and we contributed them back to these projects, to make them better and reusable on other platforms. So that's quite good.

Another difficulty we are having is that not many people, and not many projects in OpenStack, are aware of how they can do third-party CI for their projects. It is not really well documented; it was really hard to find out how to set it up and how to make it hook in properly. This is something that needs to be improved.

Okay, so about the next steps. As I was saying, this was totally focused on Ubuntu; we need to finish the support for CentOS and SUSE, which is nearly done. Currently we are using stable/mitaka, because that is what Puppet InfraCloud is using, but our intention is to move to master. It shouldn't be very difficult, but I think we may hit some problems with the Puppet manifests if some component has changed or a new data format comes in, so we need to test that as well. We need to introduce HA: we are currently running with just one compute and one controller, and our idea is to provide HA so we can run with three controllers and two computes. That is work that needs to be done as well. In terms of networking, Puppet InfraCloud was very simple: it was using just Linux bridge and provider networks. To be able to use it with OPNFV we need to add Open vSwitch support as well, and we also need to gradually introduce some other improvements for OPNFV, so there is work to be done here. If anyone is interested in contributing, it will be
very helpful for us. We also need to enable third-party CI for more projects: every project that is really involved in this effort should have third-party CI, to be sure that it keeps working for us and is not breaking what we run in production. So that's more or less the experience I wanted to share, and now I will pass it back.

Thanks, Yolanda. Now, this last bullet — I want to say a few more words about it. When we say third-party CI for more projects, we are talking in the OpenStack context here: enabling third-party CI for other plugins and so on. But if we look at OPNFV again, we have lots of different upstream communities: OpenStack, OpenDaylight, ONOS, KVM, OVS and so on. And this really means that we will enable third-party CI with other communities as well. We have already started talking with OpenDaylight people to see how we can bring OpenDaylight into OPNFV faster, too, because with OpenDaylight we also have some slowness when it comes to feedback, and some manual steps are involved there. They have great ideas, and we want to make this happen for OpenDaylight, so that when we get OpenStack deployed we can put the latest OpenDaylight on top of it and give even better and faster feedback. So we will continue working with other communities to enable third-party CI with them as well.

With this third-party CI setup, the development workflow changes drastically, and as it says there, the feedback time can go down to one day, essentially. The difference between this diagram and the previous one is that green fast-feedback line. Basically, when OpenStack Gerrit gets a new patch set, or master moves on in Gerrit, we can directly pull that down into OPNFV CI, test it properly, and give feedback to the developers who contributed those patches. This can go on and on: fixes can be provided faster, issues can be detected faster, and then we can essentially put this platform together — the latest, working version of the platform —
together on a daily basis. This will help us to move things forward both in OPNFV and in the other communities — in this case, OpenStack.

The additional benefits of this: basically, again, patches by OpenStack developers can be tested early in the OPNFV environment. As I mentioned, our hardware infrastructure is pretty good, and we are in a sense donating a test environment to OpenStack. If you look at it from an OpenStack perspective, we are not using their resources, and we are not asking them to give us any resources; we are saying: OK, we will run testing of OpenStack in our environment, and this will give feedback to OpenStack developers as well. Because we will not filter the changes between OPNFV and OpenStack: when we deploy from master, master will come, and whoever contributed to that master will have their work tested in the OPNFV environment. This will help OpenStack to improve quality as well. The features of OpenStack can be tested, and their usability improved, in NFV deployments — if you check the sessions of this summit, there are lots of NFV-related talks, and this will benefit the NFV-related work in OpenStack. And again, between OpenStack and OPNFV, the patches coming into OpenStack, and their impacts on or dependencies in OPNFV, can be tested early, found early, and corrected and fixed early.

The other thing: when OPNFV developers come up with a new feature, they go to OpenStack, create a blueprint, and once the blueprint is approved, they go and code it there. But it might take time for OpenStack to accept such a change, because they might not have enough data to approve it. With this third-party CI, we are hoping to give this type of feedback from a real bare metal environment and help reviewers in OpenStack to hopefully approve these types of changes faster, because they can check the OPNFV CI and see: oh, this worked there, we should approve it. And the
backward compatibility is the other thing. As I mentioned on the previous slide, the third-party CI principle can be, and will be, applied to different communities. We are starting with OpenStack now, we will continue with OpenDaylight, and we will hopefully start with other open source communities, if they are willing to contribute to this work with their ideas. We don't ask them to come and fix our problems for us; we want them to tell us whether we are doing things in the right way, in the proper way, and adding value, not hassle.

Yes, to summarize what that means: OpenStack third-party CI is a good means to help us solve these issues. OpenStack third-party CI is a tool that is well suited for this synchronization between two open source projects, not only for the synchronization between companies, which is what it was originally meant for from the OpenStack side. Using this concept between open source communities is not only a benefit for OPNFV — we see a big benefit for OpenStack as well, so it's a win-win situation for both communities. We also saw in the experience sharing that third-party CI is easy to establish in such an environment and that it takes immediate effect — we are very fast in seeing the benefits of this setup. And as an outlook for the future: transferring this not only to OpenStack but to all these other open source communities, we see much more benefit there, for both OPNFV and all the other open source communities where we can establish these methods.

So that is the summary. Questions? We are a little bit early, so we have enough time — five, six minutes for questions. Do you like this?
As developers, I suppose you are doing work in OPNFV and OpenStack, and this will hopefully relieve some problems, by running things in this type of bare metal environment and giving you faster feedback — so you don't have to remember what you coded months ago, or chase other people to look at code you contributed months ago. And obviously we are doing this for both communities; it is not limited to OPNFV developers. OpenStack developers can come and ask for things, and if they think we can do this in a better way, we are open to all ideas. Yeah?

Yeah, so this is something that needs to be done. The first step is to add Open vSwitch and proper network support in Puppet InfraCloud. Normally all these projects have Puppet modules already, because they are used by other installers. So I think the idea is that it can be modular: we go with the basic cloud deployment, and then we could consume those modules depending on the needs, to add proper networking as well. Not the installers directly — from them we want just a basic cloud installation — but we can take their modules, as long as we keep it modular. The Puppet modules are there because every time a new feature comes into OpenStack there is a Puppet module — because, for example, TripleO is using them. The problem with TripleO is that it is very heavy; it is not ideal for testing. But we can consume the same modules; it's a matter of figuring out how to do it in a modular way without affecting the other projects using them.

Yeah, the other thing is that we are working directly upstream, for Bifrost and Puppet InfraCloud, and we try to push as much as possible upstream, as long as it is not complicating things for OpenStack Infra. If they see that something is really not what they should have in OpenStack Infra, we might keep it local to OPNFV until they see the need, and then we can put it back into OpenStack Infra. We will basically consume this type of
Puppet manifests. We have such manifests for different OPNFV projects, and for plugins as well, so we can reuse all of those — from either OpenStack or the different installer projects we have. And really, we shouldn't reinvent things; we should reuse as much as possible and contribute to existing work, rather than coming up with something from scratch.

Yeah, I think the way it works is that we have the Releng project, which is where we really keep all the manifests for our installation, and then we consume the Puppet InfraCloud modules. It doesn't mean we only consume those: we can consume Puppet InfraCloud, and we can also consume our own modules if we need to, as long as we can glue them together in a clean way. In Releng we have the really OPNFV-specific things — the hardware-specific and deployment-specific parts — so everything is upstream. And as Yolanda mentioned in her part, we hopefully enable more communities, or companies, to reuse the OpenStack Infra tools internally, because we are trying to make those utilities consumable by anyone running in any lab, not just OpenStack Infra.

I think with TripleO — yes, it provides all these features, but it's very heavy. Maybe they are doing it, but when it comes to the different installers, the priorities are different. For this work, we provide this as a service, and our only priority is the developers' benefit: if they need something faster, we help them get it faster. But when you look at the other installers, they might have extra features on top of OpenStack, and those features might take priority over pulling down master and installing from master. Maybe that's why they are chasing stable versions. In TripleO, upstream, in Infra, they are deploying from master; the problem is the layers that go on top of it — OpenDaylight, the Apex or...
It's basically pushing Fuel upstream, not OpenStack upstream; Apex is pushing TripleO upstream, not OpenStack upstream; and so on. We are pushing directly to OpenStack. Do we have any other question, Arik? Sorry. Any questions, Mauro? Okay, thank you for coming, and enjoy your lunch. Thank you.