OK, thanks for coming in at this late hour of the day. We have our final presentation, by Tom and Feng again. Go ahead.

Thank you. Hello. This afternoon we're going to talk about downstream integration and packaging. My focus is mostly on RPM packaging, but most of it has an analogue in the DEB packages. Initially, FD.io/VPP tended to emphasize DEB packaging, although RPM packages were available. I've been working, and others have been working, all along to get these RPM packages into downstream repos, with a focus on CentOS for now, most likely via the CentOS NFV SIG, and perhaps RHEL in the future. We'll talk about the packages and how they're arranged fairly briefly, and then we'll get into the orchestration, which Feng is going to cover. Again, my name's Tom Herbert, I'm from Red Hat, and Feng Pan is from Red Hat too. I don't have a red hat on right now, sorry.

So we'll start by talking a little bit about the packaging. Both RPM and DEB packaging are available for VPP. Recently SUSE has joined the VPP project, so there's a third RPM variant, and I'm hoping to talk to some SUSE people to make sure they're happy with the approach being taken right now for building SUSE packages. Following that, we'll talk about the orchestration of VPP, and Feng will take over.

Let's talk for a moment about the 17.04 release packages and what's available right now. After that, I want to talk about the issue with the DPDK packages, how DPDK is bound in with VPP, and what the future looks like in terms of what packages might be available in CentOS.

Right now, since we don't have this in a CentOS repo, we have our own repos in the Nexus repository that the Linux Foundation supplies to FD.io. If you create this repo file under /etc/yum.repos.d on your CentOS system, you can do yum install vpp and it will just work (a sketch of that setup follows below). So that's the way things work now: it installs from the Nexus repo, but it's not part of the downstream CentOS distribution yet.

This is what's in the RPMs. Honeycomb has been split into two projects: the generic Honeycomb, and the Honeycomb specific to VPP. It's a bit of a work in progress, but this comes from the new Honeycomb project. The other VPP RPMs are what you'd expect: the VPP RPM itself, the libraries, the -devel package with the header files, debuginfo of course, and the plug-ins, which we'll come back to in a minute. In addition, the other projects in FD.io generate RPMs as well: Honeycomb, which I just mentioned, NSH SFC, which you just heard about from Danny, and the API bindings, one RPM each for Java, Lua, and Python.

We also formed an RPM DPDK packaging project; there's a companion deb_dpdk project in FD.io as well. The idea of the deb project was to get DEB packaging for DPDK, which I believe wasn't available upstream. We started the companion RPM project, but RPM packaging is already available upstream from DPDK, and as a matter of fact we're using that right now to build the DPDK plug-in.
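[Editor's note] To make the repo setup just described concrete, here is a minimal sketch. The repository file contents follow what the FD.io documentation published around the 17.04 timeframe; the exact baseurl may have changed since, so treat it as illustrative rather than authoritative.

```sh
# Illustrative only: create the FD.io release repo file and install VPP.
# The baseurl reflects the FD.io Nexus repo as documented at the time;
# check the current FD.io docs before relying on it.
sudo tee /etc/yum.repos.d/fdio-release.repo > /dev/null <<'EOF'
[fdio-release]
name=fd.io release branch latest merge
baseurl=https://nexus.fd.io/content/repositories/fd.io.centos7/
enabled=1
gpgcheck=0
EOF

# Install VPP; vpp-plugins, vpp-lib, vpp-devel and the API bindings
# come from the same repo.
sudo yum install -y vpp vpp-plugins
```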
Looking ahead, though, this RPM DPDK project might be a place to work on common dependencies and DPDK configuration issues, to make sure DPDK is properly configured for VPP. Part of the issue is that downstream users have other data planes that also use DPDK, and those are configured in a slightly different way, which is where things become a challenge. For example, the official OVS-DPDK packaging, which I think is OVS 2.6 with DPDK 16.11, is part of the Cloud SIG, the CentOS community's distribution of OpenStack, and since we're all Stackers here, we're probably interested in that. More recent packages are being built in CBS, the CentOS Build System, but we don't yet build VPP, or a newer DPDK, there officially.

Then there's the NFV SIG. SIGs in CentOS are special interest groups; they're places to put things that aren't necessarily part of the official CentOS distribution. The NFV SIG in particular is where we can put things that are a release ahead of what's officially available downstream, in Red Hat OSP for example, and that might include VPP. So that's probably where we're going to go, and it should be available there once we solve a couple of upstream issues around producing a source RPM, or a tarball, that we can build from independently of Git.

Currently VPP uses a plug-in system, which makes things easier in some ways. The packages are now built in a kind of ephemeral directory instead of in build-data as they used to be. That was a bit of a change, but it actually improves things; it's a new build system that was pushed with some recent patches, largely by Damjan.

DPDK itself is really nothing but a set of libraries and utilities. There isn't any DPDK binary to run and no agent associated with DPDK, just utilities such as the device-bind script and testpmd, things which aren't specifically needed to configure VPP; otherwise it's just the library. VPP builds DPDK as a plug-in, created at build time. In other words, the DPDK configuration VPP is compiled with is fixed when VPP itself is compiled, so we can't create a dependency on an external, previously built DPDK RPM. In the future we might want to do that: have a single DPDK RPM that supplies both OVS-DPDK and VPP, and maybe other consumers.

So what's in the RPMs? With the new build system, the DPDK plug-in, shown in boldface here, is actually in the vpp-plugins RPM. In earlier versions of VPP there was a separate DPDK module build. Debian does things slightly differently here: it uses DKMS and still builds a kernel module for igb_uio, I believe, and we don't use that in the RPM packaging. The other things here are the libraries for the various plug-ins that provide VPP functionality, like SNAT, the memory interface (memif), the load balancer, flow-per-packet, and the other components that will be familiar to anyone who knows VPP. We have vppctl and vpp itself, the two binaries, and the service files for starting and stopping VPP are packaged with the main RPM. The API bindings are packaged separately, as described next.
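[Editor's note] As a quick illustration of the point that the DPDK plug-in lives inside vpp-plugins and that the service files ship with the main RPM, here is a hedged sketch; the exact file path shown is from memory of the 17.04-era packaging and may differ between releases.

```sh
# Confirm the DPDK plug-in is shipped inside the vpp-plugins RPM
# (the path is illustrative; it may vary by release and architecture).
rpm -ql vpp-plugins | grep dpdk_plugin
#   /usr/lib/vpp_plugins/dpdk_plugin.so

# The vpp RPM carries the systemd unit, so start/stop is the usual dance.
sudo systemctl start vpp
systemctl status vpp

# vppctl is the control binary packaged alongside vpp itself.
sudo vppctl show version
sudo vppctl show plugins
```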
There's a separate RPM for each of the API sets: Python, Lua, and Java. Then there's another RPM created by the NSH project. In addition, the -devel package has the header files, and vpp-lib has, as expected, all the libraries you'd need if you were going to write an additional plug-in; in theory you could do that without the full source tree. And that's all I'm going to cover. The point I'm trying to make is that this is what we have now, and there are no major impediments to making VPP available downstream in the CentOS NFV SIG, as long as we're not expecting to use a common DPDK. It'll be the same RPMs you see right here, and hopefully we'll get those downstream by the 17.07 release. Now I'll turn it over to Feng, who will talk about orchestration.

All right. I'm going to go over how we deploy FD.io projects now that we have everything packaged. We'll talk about two different ways: in a standalone environment, and in OpenStack. First is puppet-fdio, a project that lives in FD.io. It deploys FD.io projects like VPP and Honeycomb on Linux, and it's written in Puppet, as its name suggests. Why Puppet? Because it gives us an idempotent way to manage the configuration files and the services. It binds interfaces to VPP, configures other VPP options, and makes sure the VPP or Honeycomb service, whichever the case may be, is in the desired state, for example running. That gives you an easy way to run it over and over again and reach the same state every time (see the sketch after this passage). This tool can run in standalone mode, or it can be part of an OpenStack installer; right now we use it in the OPNFV Apex installer, which we'll talk about next.

A brief overview of OPNFV: OPNFV is the Open Platform for NFV, an open source effort to do system integration specifically for NFV. The goal is to develop NFV features using upstream projects, then test and integrate them in a continuous CI/CD environment, which gives you a way to develop new NFV features quickly. This is the platform overview picture: we pick different upstream projects, like OpenStack, OpenDaylight and other SDN controllers, and various data plane choices; we integrate and test them; and when we identify missing features, we develop them in those communities. We run through all of this in the CI environment.

OPNFV composes what we call scenarios. You can think of them as test cases, or use cases, that we put together, develop, and test. The FD.io-related scenarios are named "os", for OpenStack, followed by the SDN choice, or "nosdn" for no SDN controller. In the nosdn case for FD.io, that's the networking-vpp ML2 driver. Of course, we can also use OpenDaylight; in that case we can have OpenDaylight do both L2 and L3, or we can use the Neutron L3 agent (the qrouter) for L3, in which case we call the scenario OpenDaylight L2. Those are all different scenarios, and they're separately developed and tested.

Apex itself is based on the OpenStack TripleO project, and it automates the entire installation process for all those projects. Upstream, in OpenStack or OpenDaylight or FD.io, we tend to focus on just those projects' features themselves.
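[Editor's note] To make the puppet-fdio description above a bit more concrete, here is a minimal sketch of applying the module standalone. The top-level `fdio` class name follows the module's naming convention, but the parameter shown is an assumption for illustration, not the verified puppet-fdio interface.

```sh
# Hypothetical standalone use of puppet-fdio: bind NICs to VPP, render the
# VPP startup configuration, and keep the vpp/honeycomb services running.
cat > fdio_node.pp <<'EOF'
class { '::fdio':
  # Hypothetical parameter: PCI addresses of NICs to hand over to VPP.
  vpp_dpdk_devs => ['0000:05:00.0', '0000:05:00.1'],
}
EOF

# Applying the manifest is idempotent: running it repeatedly converges the
# node to the same state (interfaces bound, services in the desired state).
sudo puppet apply fdio_node.pp
```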
But the integration and the installation are handled by Apex. With one command, in one step, we deploy to either a virtual VM environment or a bare metal environment, and we use both. We use the virtual environment particularly for development; it gives you a very quick way to stand up a cluster of nodes where you can add new features quickly. Bare metal is very useful if you want to do scale testing or performance testing, and we have that running in our CI environment, which runs every 24 hours or so.

The latest release of Apex is based on the OpenStack Newton release. The reason we use a stable release is simply that it's more stable: we want to focus on the features we're developing, in this case FD.io features, for example integrating ODL controlling VPP through Honeycomb. So we freeze the OpenStack release and minimize the disruption that changes in OpenStack itself would cause. We do use the latest version of OpenDaylight, because that's the only version with VPP and Honeycomb support, and the VPP version we use is 17.04, which, again, is fairly recent. The end result of an Apex deployment is a running OpenStack environment that's ready for any kind of test, functional or performance. In our CI environment we run various tests after the deployment itself is done, so it's not just deploying; we also run tests against it.

This is a slide I borrowed from the FDS project, showing the architecture of the FD.io ODL deployment. In this case ODL is doing L2 forwarding only, the east-west traffic, and the Neutron L3 agent handles the north-south traffic. I wanted to point out which pieces Apex deploys. Apex deploys the OpenStack services, OpenDaylight, and Honeycomb; it installs all of those components, installs VPP as well, binds the tenant network interface to VPP, and sets up the OVS bridge and connects it to the external interface. The pieces it does not do are the VXLAN tunnels and the VMs; those are handled in OpenStack, when you create a network in Neutron or a VM in Nova. So Apex deploys all of those services and configures them on all those separate nodes, and a node can be either a virtual machine or a bare metal node. In HA deployments there are multiple controllers, three in our case, and Apex sets up things like etcd clustering in the case of networking-vpp, configures OpenDaylight HA, and makes sure all the services are started and configured correctly. So when you have a successful deployment, you know everything has come up correctly.

How do you do this? Installing and deploying Apex is fairly easy. We deliver RPM packages, and we also deliver an ISO image that you can load onto a node. It's a single command to deploy, and you supply three files: a network settings file, an inventory file, and a deploy settings file (a sketch of the command and a scenario file follows). The network settings file specifies your network environment. We support using a single network, so if you have a very simple setup, one network for basically everything, you can do that, or you can split the networks out, as described next.
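[Editor's note] Here is a hedged sketch of the "one command, three files" deployment just described. The `opnfv-deploy` command is real, but the flag names, file paths, and deploy-settings keys shown are reconstructed from memory of the Danube-era Apex documentation and should be treated as assumptions; the three input files are described in more detail next.

```sh
# Illustrative deploy settings ("scenario") file for an FD.io ODL L2 style
# deployment; the key names are assumptions modeled on the Apex format.
cat > my-odl_l2-fdio.yaml <<'EOF'
global_params:
  ha_enabled: false            # "HA is false"; set true for a 3-controller HA deploy
deploy_options:
  sdn_controller: opendaylight # use OpenDaylight as the SDN controller
  sdn_l3: false                # leave north-south L3 to the Neutron L3 agent
  odl_version: carbon          # assumed value: the ODL version with VPP/Honeycomb support
  dataplane: fdio              # select VPP + Honeycomb as the data plane
EOF

# Bare metal deployment: network settings, inventory, and deploy settings.
sudo opnfv-deploy \
  -n /etc/opnfv-apex/network_settings.yaml \
  -i /etc/opnfv-apex/inventory.yaml \
  -d my-odl_l2-fdio.yaml

# Virtual deployment: the inventory is generated automatically, so only
# the other two files are needed, plus a virtual flag.
sudo opnfv-deploy --virtual \
  -n /etc/opnfv-apex/network_settings.yaml \
  -d my-odl_l2-fdio.yaml
```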
If you separate them, you can have as many networks as you want: an API network, a storage network, whatever else; or you can keep everything on one. The inventory file stores things like IPMI information so we can PXE boot the machines, and it's only needed for a bare metal environment. So you issue one command, and the result is basically a running OpenStack environment; at the end of that command you can create a network, create a VM, and do whatever else you need to do. More detailed instructions on how those files are structured can be found at the link.

I want to go through each of those files and talk a little bit about them. The network settings file defines things like my external network range, my gateway, my DNS server, my NTP server, things of that nature. We ship a few of these with Apex, for both IPv4 and IPv6, so you can use them; in a virtual environment that's especially useful, because you don't have to invent one, you can use the stock file. That's actually true for all three of these files.

The inventory file, shown here, is a simple YAML file where the crucial information is really just the IP addresses of the IPMI interfaces and the credentials for them; it's basically how we reach those bare metal nodes. In a virtual environment this is generated automatically, because we get to define those VMs, so you don't need it. The CPU, memory, and disk information is in fact optional; it can be filled in during the introspection phase of the TripleO installation, so it can be there, but it doesn't need to be. The crucial information is just how we reach the node.

The deploy settings file is what really defines a scenario. This file tells the installer which features to enable and disable, and most features are basically a true/false flag in there, so you can flexibly create your own scenario or use one of the many that we ship. For FD.io, this is what an FD.io ODL L2 scenario looks like: we say HA is false (we could also say true), the SDN controller is OpenDaylight, we don't want the SDN controller to do L3, we specify the ODL version we want to use, and there are other features you can enable or disable. So you can compose your own scenario file, or you can use one of the ones we ship and know it's tested; in our CI system we test all the scenarios we ship, and they're verified to work.

So that's what's there today. What's next? In our next release, which should be about four or five months from now, we're going to upgrade to the next OpenStack release, Ocata. One thing we're talking about adding is support for OpenStack's master branch, so that if you'd like to, you can use the head of master for OpenStack every time you build and deploy, and actually develop OpenStack itself using this environment. That's a bit more adventurous, depending on your use case, but it could be useful for some. The other feature that would be really useful for VPP is DVR support, so that L3 is done on every compute node rather than centrally; today it's done on one node. So that's what we're going to add. And I believe that's it, if anybody has any questions. I'd like to add one comment.
The first thing: the different ODL options Feng just talked about are the same options we went over in our earlier presentation today. That presentation described how they work internally, and here we've described how to orchestrate them. The second point I wanted to make is about the puppet-fdio project: there's no theoretical reason why we can't use Ansible as well. It's just that initially it was Puppet, and when resources become available, we as a community will hopefully be able to do Ansible for orchestration too. Any questions?

So TripleO is the upstream for Director, and all of our development, once we're done with integration into Apex, actually goes upstream to TripleO. All the features we're adding, for example VPP, we are already adding to TripleO; VPP is already in TripleO. When it makes it into OSP is more of a product question, more of a roadmap thing. We're upstream here, we're not really product people, but upstream it's all there, and we try not to carry too many patches on top of upstream. Thank you.

Have you had any thoughts about offering containers as a first-class delivery mechanism? Absolutely, absolutely. For us, we don't prefer one over another. OPNFV as a community basically listens to operators' requirements, and if there's a desire to support containers or anything else, you simply say, "we want that," add a scenario, and we add it. Apex today, for example, supports twenty different scenarios; the FD.io subset is what we're listing here. We really don't make judgments; we don't say we don't want containers or anything else. So if there's a use case, if it's something desirable to have, we can certainly add it. We'll also evolve with TripleO: as TripleO evolves toward using containers for installation, we'll naturally do that too. And of course there could be other container use cases beyond the installation process that TripleO covers. Yes. All right, I think that's it. Well, thank you all very much. Thank you.