My name is Fatih and I work for Ericsson. I am involved in open source, especially with the open source project called Open Platform for Network Functions Virtualization (OPNFV), which deals with the integration of open source components. And Yolanda: I work at Red Hat, I am an NFV partner engineer, so we help partners, customers and telcos in the NFV area with OpenStack integration, taking all the NFV features and integrating them into the products, especially on OpenStack.

So, since the beginning of this track, you have been listening to different talks that are mainly about different SDN and NFV technologies such as VPP, DPDK, ONAP and so on. All these components are developed, integrated and tested by different open source communities, and they are all doing a great job and trying hard to make sure these components work fine. But in open source, there are not many communities dealing with the integration of these components, and there was a gap there that needed to be filled. Especially when it comes to network functions virtualization: if you look at how traditional networks are built, it was obvious that they could not scale and could not be built using open technologies, and it wasn't easy to innovate in that space. Based on these concerns, ETSI came up with the NFV architecture in 2012, about five years ago, to see how networks could be developed and built using new technologies such as cloud, virtualization and, lately, containers and cloud native.

As we all know, standards bodies do a great job, but sometimes standards work takes a bit longer than some might want. Based on that thinking, the open source project Open Platform for Network Functions Virtualization, OPNFV for short, was established around the end of 2014, about three and a half years ago, to do integration work using these different technologies in parallel to the standards work: to see if the idea, the architecture, works, and at the same time whether the components developed by different open source communities work well together to build this platform up.

So basically the OPNFV project aims to create a reference platform for NFV using open source components and open source technologies. This is very challenging, because when you look at the NFV stack and architecture you see many different things: at the bottom you have the hardware with compute, storage and network resources; on top of that you have virtual network, virtual storage and virtual compute resources, which together form the NFVI; alongside that you have MANO; and then you put VNFs on top. So it's really challenging work to bring this platform up using open source components. Since day one, OPNFV has been trying to do this by bringing different technologies together, such as OpenStack, OVS, DPDK and, lately, Kubernetes.

While doing this, OPNFV tried to be on the safe side, meaning that we started with stable, released versions of all these different components. When it comes to OpenStack, we wait for OpenStack to do their official release; after six months, we take that release into OPNFV and try to integrate it with other technologies such as SDN controllers. It's similar with the SDN controllers as well; we have OpenDaylight, ONOS and OpenContrail there.
We take the stable, official releases of the SDN controllers as well, and we integrate the released version of OpenStack with the released version of OpenDaylight, for example. We do this for all the different components we get from open source communities. In the SDN space we have, as I mentioned, OpenDaylight, OpenContrail and ONOS. In virtual switching we have OVS and VPP. We have MANO components such as Open Baton or, lately, ONAP. We take all these components, try to put them together, and hopefully improve the overall quality of all of them for anyone who wants to use these components for different purposes.

Now, since we are using stable versions of these components, it is a bit slow to provide feedback to the upstream communities. For example, if you look at our latest OPNFV release, we did that around October, and we used Ocata as part of it. By that time, OpenStack had already released Pike; Pike came out around September, and we released OPNFV in October with Ocata, so we were at least six months behind upstream. And since the point of OPNFV is integrating these components and finding what works and what doesn't from a platform point of view, when you find problems with how these components work together and go back to those communities, it's pretty late already; it's six-plus months. All the communities we work with are pretty helpful and friendly, but put yourself in their situation: if someone comes to you after half a year and asks, OK, we have this problem, how can we fix it, you might have already moved on to another item in your backlog and you might not have the time or interest to fix that thing. Or that thing might have gone out of your stack, so you might not have any possibility to fix it either.

So the speed at which we had been doing things in OPNFV became challenging for us, because we are not talking about a single community like OpenStack; we are talking about more than ten communities, maybe. And if you put all these delays, months of delays, together, it adds up to years of delay in the feedback we get out of our integration efforts. Based on these findings, we started thinking about how we can make things faster and become more relevant to the communities we are working with.

We started talking with the OpenDaylight community originally, and we said: guys, we are getting official releases from you, but can we get more recent versions of your technology, faster than we have been doing? And they started working with us to enable continuous delivery in their community so we can do continuous integration in OPNFV. As one of our friends, Daniel Farrell, says, OPNFV can only go as fast as the slowest of its upstreams. If one community is very slow, we can't be faster than that community, because we depend on them. Using this thinking, and with OpenDaylight as the first community to go after this continuous delivery idea, we brought up continuous delivery pipelines from OpenDaylight, which basically means that whenever they build something in their CI using their autorelease process from their master branch, it is possible to get that latest built and tested artifact into OPNFV. So we can basically integrate the latest and greatest version of OpenDaylight into our NFV platform.
That looked very positive, and we started looking at how we can do the same thing for the other communities we work with. As I mentioned, we work with OpenStack as well, and OpenStack is pretty important in NFV. So we went to the OpenStack community saying: OK, guys, we want to do what we have been doing with the OpenDaylight community; we want to consume the latest and greatest versions of OpenStack from the master branch. Based on those conversations, we came to the conclusion that we could find an easy way to get OpenStack from trunk. We started working on this about a year ago, and we now have a way to deploy OpenStack from trunk, which basically means that whatever the OpenStack community has been working on during the current release cycle, in this case Queens, we get that version, in the worst case, a week later. That cuts the feedback time from six months to a week, which is a great improvement. And again, if you look at all the different components in the NFV stack, you can see how big a difference this can make: instead of waiting months for different communities to send fixes to us, we can get those fixes in a week's time and, hopefully later on, on a daily basis.

You see all these different things on this slide, and it's really complicated to do this work and integrate these components. Obviously, we are not doing it manually. We have a CI machinery for OPNFV, which basically goes through different compositions of the platform, because some of the components can't be collocated together, and some of the components have more people looking after them or more people interested in them. Based on those things, we create different combinations, which we call scenarios. The first thing we do is provision our machines with one of three different Linux distributions: Ubuntu, CentOS or openSUSE. Then we put either OpenStack or Kubernetes on them. Then we put in different SDN controllers: instead of plain Neutron, for example, we put in OpenDaylight or ONOS. Then we add different features, like service function chaining or BGPVPN. Finally, once we compose this combination, we run testing against the integrated stack, which is a big difference compared to OpenStack or OpenDaylight or other communities, because they mainly deal with their own components at a very small scale. In our case, we put all these components together and try to find what works and what doesn't.

As I said, this is impossible to do manually, and our CI machinery for OPNFV uses distributed bare-metal labs all around the world. Currently it's, I think, 15 labs providing resources, and we have more than 200 bare-metal nodes supporting two architectures, x86 and ARM. As part of our latest release, we had more than 50 scenarios, more than 50 different combinations of components, and we deployed OpenStack more than 8,000 times in six months, because we continuously try deploying things to see if they work. We started with OpenDaylight when it comes to XCI, consuming things from master branches, and then we worked with OpenStack, so we can now deploy both OpenStack and OpenDaylight from trunk. And fd.io had been handled in a similar way, but at a different scale in OPNFV, because that only dealt with fd.io and OpenStack.
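To make the scenario idea concrete, here is a small sketch of how these combinations are identified. The naming pattern follows OPNFV's convention of the time; the example list is illustrative, not exhaustive, and the variable at the end is the one OPNFV CI jobs commonly used to select a scenario.

```sh
# OPNFV scenario names encode the composition of the stack:
#   <vim>-<sdn controller>-<feature>-<ha mode>
# A few illustrative examples:
#   os-nosdn-nofeature-ha     OpenStack, plain Neutron, no extra feature, HA
#   os-odl-sfc-noha           OpenStack + OpenDaylight + service function chaining
#   os-odl-bgpvpn-ha          OpenStack + OpenDaylight + BGPVPN, HA
#   k8-nosdn-nofeature-noha   Kubernetes instead of OpenStack
# Deploy jobs select a scenario through an environment variable:
export DEPLOY_SCENARIO=os-odl-sfc-noha
```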
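And as a rough sketch of how such a from-trunk deployment is driven in practice, assuming the OPNFV releng-xci sandbox of that era (the flavor values and paths may differ between versions):

```sh
# Clone OPNFV's cross-community CI (XCI) tooling and run a small deployment
# that pulls OpenStack and friends from master rather than from a release.
git clone https://gerrit.opnfv.org/gerrit/releng-xci
cd releng-xci/xci
export XCI_FLAVOR=mini   # small deployment; aio/noha/ha flavors also existed
./xci-deploy.sh          # provisions nodes with Bifrost, installs via OpenStack-Ansible
```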
In our case, we are dealing with all the components we have, or aim to deal with them, and hopefully we will include ONAP in this as well. These are the upstream projects' CI pipelines: they all have their patch set verification, daily master jobs and so on. We continuously track the master branches to find working versions, and once we find working versions, we pin those versions in OPNFV repos, create the scenarios using those pinned versions, and run testing against them on bare-metal nodes.

OK, so now I'm going to talk a bit about the tooling we are using for this. We have different components here: Zuul, Bifrost, OpenStack-Ansible and Kubespray.

Zuul is what we are using for project gating. It means that all the changes created in a project go through Zuul's pipelines, and each change must pass a set of jobs to verify that it works and is in a good state to be merged. Once Zuul has run all the jobs, the change passes through the gate pipeline, the one used for merging; it passes a lot of checks there, and after that it merges into the code, and you can trigger another set of jobs for package building, publishing or whatever.

Then we are using Bifrost for hardware provisioning. If you have a set of bare-metal servers or VMs, we use Bifrost on them. Bifrost is just a set of Ansible playbooks to deploy a standalone Ironic; Ironic is the OpenStack component in charge of provisioning machines, bare-metal servers especially. It is written in Ansible and it supports three major distros: Ubuntu, CentOS and openSUSE. Bifrost is very easy to use; there are basically four steps. First, you create an inventory that collects the data about your servers, like the IP address, user and password you are going to use to connect; you put that in a file. After that, you run the second step, which installs Bifrost: a set of playbooks that install Bifrost on your server. Once you have it running, you do the enroll step, which takes all the information from your inventory, runs against your machines, and collects the information about them, like the hardware and the type of disk, everything you need for introspection, and stores it in a database. Once it's stored in the database, the final step is to deploy. Deploying means it takes all the bare-metal servers and installs the operating system you want; you can install CentOS or Ubuntu, and you can also customize the image you are going to install, adding custom packages or extra configs. So afterwards all the servers are in the same desired state. It also applies a basic network config, so all the bare-metal servers are accessible on the network the way you choose, and it copies the SSH keys you decide on, so you can access the servers afterwards. It is very solid: it has been used by OpenStack distributions that deploy on bare metal, mostly using Ironic, and we run third-party CI tests on it. That means all the changes that go into Bifrost, which is an OpenStack project, are also gated against our OPNFV testing, so we validate that no change in Bifrost breaks the OPNFV CI setup. It's really good for that; we can trust Bifrost.
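To make the pinning concrete, here is a minimal sketch of what such a pin file could look like, assuming a shell-style variables file like the one OPNFV's XCI tooling carried; the variable names follow that convention and the values are invented placeholders, not real commits.

```sh
# pinned-versions: known-good upstream commits found by tracking master
# (all values below are placeholders for illustration)
export OPENSTACK_OSA_VERSION="5b2f1c0e..."    # openstack-ansible commit under test
export BIFROST_VERSION="9d41a7b3..."          # bifrost commit used for provisioning
export OPENDAYLIGHT_VERSION="0.8.0-SNAPSHOT"  # ODL autorelease artifact version
```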
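For Zuul itself, gating is declared per project. A minimal sketch, assuming Zuul v3's in-repo configuration; the job name is hypothetical and would be defined elsewhere:

```sh
# write a minimal in-repo Zuul project definition
cat > .zuul.yaml <<'EOF'
- project:
    check:                      # runs on every proposed change
      jobs:
        - opnfv-xci-verify      # hypothetical job: deploy + smoke test
    gate:                       # must pass again before Zuul merges the change
      jobs:
        - opnfv-xci-verify
EOF
```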
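And the four Bifrost steps map to an inventory file plus three playbook runs. A sketch using the playbook names Bifrost shipped at the time (install.yaml, enroll-dynamic.yaml, deploy-dynamic.yaml); the node data, addresses and credentials are invented.

```sh
# 1. Inventory: describe the machines (all values are placeholders)
cat > /tmp/baremetal.json <<'EOF'
{
  "node1": {
    "name": "node1",
    "driver": "agent_ipmitool",
    "driver_info": {
      "power": {
        "ipmi_address": "192.0.2.10",
        "ipmi_username": "admin",
        "ipmi_password": "secret"
      }
    },
    "nics": [{ "mac": "52:54:00:12:34:56" }],
    "properties": { "cpus": "8", "ram": "16384", "disk_size": "100" }
  }
}
EOF
export BIFROST_INVENTORY_SOURCE=/tmp/baremetal.json

cd bifrost/playbooks
# 2. Install Bifrost (standalone Ironic) on this host
ansible-playbook -i inventory/target install.yaml
# 3. Enroll: read the inventory and register the nodes in Ironic's database
ansible-playbook -i inventory/bifrost_inventory.py enroll-dynamic.yaml
# 4. Deploy: write the chosen OS image to the nodes over IPMI/PXE
ansible-playbook -i inventory/bifrost_inventory.py deploy-dynamic.yaml
```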
OK, the next step is OpenStack-Ansible. It is tooling for installing OpenStack. There are several tools for that, like TripleO, but we chose this one because it's easy to customize; it's based on Ansible as well and it's very, very flexible. You can choose the way you want to deploy OpenStack: you can run the services in containers, using LXC, or you can deploy directly on bare metal. It is very easy to integrate and very flexible: you can decide the set of roles you want to include in your deployment, whether you want Nova, Keystone, HA or not, and, especially for OPNFV, you can even write your own roles. So you can decide: I need to put an extra layer on top of OpenStack. It also has several ways of deploying: you can deploy in developer mode, or in production with HA, with NFS; you can complicate it as much as you want. And it supports the major distros as well: Ubuntu, CentOS and openSUSE.

OK, so the final one: Kubespray. We also started to work a bit on Kubernetes, so we are starting to use Kubespray. It is, again, a set of Ansible playbooks to install a Kubernetes cluster, created inside the Kubernetes community. It can be used on multiple platforms: you can deploy on bare metal or virtual machines with Ansible, or there is also the possibility to do it on OpenStack or on Amazon, so it's quite flexible. It's also very composable: you can decide, for example, the kind of network plugin you want to bring in; it has different options there. It supports CoreOS, Ubuntu, Debian and CentOS, so you have flexibility there. And it's solid, because it has end-to-end integration testing performed on different platforms, so it's reliable. We are starting with it as a proof of concept: we mostly have OpenStack down there, but we're starting to get Kubernetes integrated into OPNFV as well.
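For a feel of OpenStack-Ansible, here is a minimal sketch of the all-in-one developer-mode bootstrap mentioned above; the script names come from the project's documentation of that era, and the clone URL may differ by version.

```sh
# OpenStack-Ansible all-in-one (AIO) developer deployment
git clone https://git.openstack.org/openstack/openstack-ansible /opt/openstack-ansible
cd /opt/openstack-ansible
scripts/bootstrap-ansible.sh   # install Ansible and the OSA roles
scripts/bootstrap-aio.sh       # prepare this host for an all-in-one lab
cd playbooks
# run the three top-level playbooks: hosts, infrastructure, then OpenStack itself
openstack-ansible setup-hosts.yml setup-infrastructure.yml setup-openstack.yml
```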
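And similarly for Kubespray, a sketch of a small cluster deployment, assuming the upstream sample inventory layout of the time; the IPs are placeholders, and file and repository names vary between Kubespray versions.

```sh
git clone https://github.com/kubernetes-incubator/kubespray.git  # later moved to kubernetes-sigs
cd kubespray
pip install -r requirements.txt   # Ansible and other dependencies
cp -rfp inventory/sample inventory/mycluster
# generate a hosts file from a list of node IPs (placeholders)
declare -a IPS=(10.0.0.11 10.0.0.12 10.0.0.13)
CONFIG_FILE=inventory/mycluster/hosts.ini python3 contrib/inventory_builder/inventory.py "${IPS[@]}"
# pick the network plugin in group_vars (e.g. calico, flannel), then deploy:
ansible-playbook -i inventory/mycluster/hosts.ini --become cluster.yml
```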
OK, so I think most of you have... let's leave five minutes for questions. Before we move to questions, I want to highlight something. We have talked about NFV, SDN, tooling, CI and so on, but the main thing we are trying to achieve is to increase the awareness in open source communities when it comes to integrating all these components together and creating a platform. Some communities are pretty good at continuous delivery, but some are only just starting. And with what we have been doing, different communities have started understanding what we are trying to achieve. I think for the first time in the CI/CD space, many different communities are coming together for a workshop during ONS to talk about these things, to see how we can establish continuous delivery across open source. So it is about more than just what we talked about here; it is about a cultural, social change. Now we can take one or two questions, maybe.

Question: would you be interested in CI integration with ONOS? We have been trying to reach out to ONOS, and I suppose we have done that right now; we can start talking. ONOS was part of OPNFV one or two releases ago, and then they disappeared, and we want them to come back. So yes, we are interested. One last question? Just say it and I'll repeat the question.

Question: are all the pipelines fully automated, or is there some human interaction needed for failures? Now you need the mic. Yeah, all our deployment and testing are automated, but since we are running quite complicated tests, sometimes it requires human intervention to see what went wrong, reproduce the issue and find the problem. OK, no more time. Thank you very much, Fatih and Yolanda. Our next presentation is from Emma Foley, who is going to present the Barometer project, an OPNFV project for the collection of platform metrics and telemetry. We have been having some AV issues, Emma.