My name is Stephen Gordon from Red Hat. I'm here with my peers, Adrian Hoban from Intel and Alan Kavanagh from Ericsson. Today we'll be talking about OpenStack enhancements to support NFV use cases. Apparently, if you hang around at the end, you have a chance to win this Intel NUC, so if you didn't get a ticket, I believe there's a gentleman still going around handing them out at the back. Get in quick.

So today we'll be presenting on a couple of different things. First of all, I'll be trying to present a bit of a map of all the ways to engage around NFV and OpenStack and the various bodies involved. Adrian will be talking a little bit about some extensions that are in Kilo for NFV, developed by both our own companies and the broader community as well. And finally, Alan will be painting a picture of what we need to do to continue to evolve the cloud to support NFV.

So how did we get here? In October 2012, the ETSI standards body formed the Network Functions Virtualisation Industry Specification Group (ISG) with a view to transforming their networks: transforming from reliance on physical hardware devices, built very explicitly for a single special function, to virtualizing those appliances and running them on commodity hardware. Later, in September of 2014, the industry moved to the next stage: okay, we've created this framework, this specification for what we want to achieve at a high level; how do we take that and build something real? That's when the Open Platform for NFV, or OPNFV, was formed as a Linux Foundation Collaborative Project. There they're trying to build, effectively, a reference platform, initially composed of OpenStack and OpenDaylight but later including other variations as well, for running NFV applications on top of.

In terms of decoding some simple terms for those in the room who may not be familiar with what I'm talking about: ETSI is the European Telecommunications Standards Institute. NFV is network function virtualization, although as Alan will touch on later in this presentation, the "virtualization" is more conceptual than tied to a specific implementation, so things like bare metal and containers can still be relevant. It's more about having agility versus the old model, where things were built very specifically for the hardware, so that you can move things around and optimize the network. Finally, ETSI has the NFV ISG, as I mentioned. That's an industry specification group; they create what are called group specifications, documenting and agreeing on things like the terminology, the use cases they're talking about, and so on.

Phase one of the ETSI NFV effort really focused on converging on a common set of network requirements, including, where they existed, any existing standards they could identify, and endeavoring to stimulate innovation in an open ecosystem of vendors: both the traditional vendors they'd worked with up until this point and also vendors more engaged in some of the open source projects we're going to talk about today. Phase two is really still getting underway at the moment: endeavoring to grow an interoperable virtual network function ecosystem, and also to continue to more thoroughly specify reference points and requirements, so extensions of the work done in phase one, while achieving broader industry engagement, again across both the traditional and newer vendors in this space. Finally, another aspect of phase two is clarifying the somewhat interrelated, but distinct, intersection between network function virtualization and software-defined networking technology.
Some people will be familiar with the ETSI NFV architecture; this is the diagram that appears everywhere. Primarily when we talk about OpenStack, we're looking at the right-hand corner here, the virtualised infrastructure manager, and also to some degree the middle layer, where we talk about the virtual compute, virtual storage and virtual network componentry. So, for example, in the initial OPNFV reference platform based on this architecture, OpenDaylight would be your virtual network, but that's not to say it's the only potential solution.

As I mentioned, OPNFV is starting to try to build some of this into a real solution, focusing initially on the NFV infrastructure and the virtualised infrastructure management, which again is where OpenStack comes in. They're focused on delivering consistency, performance and interoperability between the components in that reference platform while working with the upstream communities. The idea is that they want to work with the upstream communities, including OpenStack, to build the functionality they need into the actual projects rather than necessarily carrying code themselves. The initial OPNFV focus area is the area I just talked about: the virtualised infrastructure manager, the virtual compute, virtual storage and virtual network componentry, and also the physical hardware this is all running on. There's a big investment getting kickstarted in OPNFV around how we actually test this on real hardware as well.

Within OPNFV there's a growing list of projects. These are divided into requirements projects, for example the Doctor project around fault management; integration and testing projects, for example IPv6-enabled OPNFV, where we talk quite a bit about the fact that most infrastructure components these days aren't exercised with IPv6 and they want to prove it works with IPv6, again using that hardware infrastructure to do continuous integration; collaborative development projects, for example around fast path service quality metrics; and also documentation projects: given this reference platform, how do we actually deploy and use it, and document that?

Moving further into the OpenStack sphere: within OpenStack there's a telecommunications working group, which was originally formed as the NFV subteam around the Atlanta Summit and created out of many of the same pressures that OPNFV came to be created out of. It helps telecom operators and network equipment providers articulate their requirements in a way that an open source community, without necessarily the same level of exposure to telecoms and network function virtualization, can understand and action. So it endeavors to define and prioritize requirements and harmonize those inputs, now with the help of other groups like the product working group, into the OpenStack projects themselves. The memberships of these various groups intersect: there are organizations that are members of both OPNFV and ETSI NFV, and there are also plenty that are members of the OpenStack community and have developers who work on some of these things as well.
The working group scope also extends to helping with blueprint and patch creation, submission and review, and helping people with how they message around that and explain their problem space properly. The final thing I wanted to mention there is that one of the reasons this was created as an OpenStack subteam was to help move the conversation closer to the OpenStack community: to advocate for the telecommunications and NFV use cases somewhere they have visibility to the wider community, rather than staying siloed off in their own space.

Finally, of course, we have OpenStack itself, which, as you have no doubt seen this week, is a very large community of technical contributors working on a wide, and growing, array of loosely governed projects. NFV requirements typically don't fall neatly into one box that says Nova or Neutron or Heat; more often than not, the requirements require action across groups of projects, so actioning them requires buy-in from these diverse groups, and that's what we're trying to help with. Most OpenStack projects have moved to a specification process for approving major changes. The ingredients of a good specification are basically a problem description including use cases, which of course is what we're trying to help build; a concrete design proposal that has clearly been created with thought given to how OpenStack is actually architected, and that tries not to damage other use cases; and also, of course, someone to actually implement it.

So what we're trying to do in this sphere of somewhat interrelated bodies is work together at the intersection point between all four, where we can achieve success. There is some overlap between the various groups in terms of mission, membership, scope and activities, so navigating can be a little tough. In general, where we see this merging of the worlds, where we try to help the telecoms and NFV community and the OpenStack community talk the same language, is in the area intersecting between OPNFV and the telco working group in OpenStack. I'm now going to hand over to Adrian, who's going to talk a little about improvements we actually made in Kilo for NFV use cases.

Thank you, Steve. What I'm going to introduce are some of the changes that both our companies and many other contributors in the community have worked on during Kilo. When we met last time at the Paris Summit, we talked about the type of things we wanted to get added to help with VNF deployment, adding the tuning knobs we needed for that to be successful. So I'm going to show some of the advancements that have happened in that space.

First up, I want to introduce the non-uniform memory access (NUMA) capability of modern servers. Typically, when you've got a multi-socket system, memory is closer to one socket or another. What that means is that if you're accessing memory that's close to your socket, you get the fastest, most efficient path; but if you happen to be accessing memory across that inter-socket connection, it's still quick, it just isn't quite as quick as local memory. One of the changes that actually went into Juno was the NUMA topology filter, and what this allows you to do is specify that you want to co-locate the various CPU workloads of a guest onto a single socket.
And by doing that, you get access to that local memory, which runs really quickly. During Kilo, a strict policy on that was added, so it's very direct; previously it was more the kernel doing a best effort to get you there. One of the extensions we made during Kilo, though, was to add I/O awareness to that. For those of you working on high-performance network workloads, you really want the network device that's feeding your processing to be on the same socket. So in this example, if for physical connectivity reasons you knew you wanted NIC B to be providing you with the traffic, we needed to be able to specify that your workload, as part of the NUMA node selection, would pick the socket that has that network device closely associated with it. That then gets you access to local memory, so you're running in a very efficient way, with very little overhead in the system and without exercising that inter-socket connection.

Next up, I just want to talk briefly about simultaneous multi-threading, also known as hyper-threading. It's a capability in the hardware to run multiple threads in parallel on the same execution unit. It's something we've implemented on our platforms because it's a very efficient way of getting extra performance out of your compute resources. From an operating system perspective, when you turn on simultaneous multi-threading, you end up with twice as many logical cores as you have actual execution units in the system. I'm showing a typical numbering scheme for how this works out in Linux: if we look at the execution unit on the far left, it shows up as CPU 0 and CPU 4. This is something you need to be aware of when we start looking at the new features we've added in OpenStack around CPU pinning.

Right now, there's a policy that allows us to pin particular virtual CPUs to physical ones. The first of four policies, and the one that's actually implemented, is called the prefer policy. With this policy, we're able to specify that the guest OS vCPUs get associated with, in effect, sibling physical CPUs (hyper-thread siblings) on the host. The next policy we're going to implement, and this is going to happen during Liberty, is the separate policy. With separate, what it allows you to do is specify that your guest virtual CPUs are pinned onto physically different execution units. You'll note that for guest B here we're not NUMA-aware, so CPU pinning with separate will pin onto separate units, but it doesn't include NUMA awareness; for that, we need the NUMA feature talked about previously. Isolate will be the second of the three policies coming in Liberty. With isolate, you specify that a virtual CPU should only be mapped to a physical CPU that does not have a sibling already mapped. That gets you great isolation on the host and makes sure you can run in a very deterministic manner. And the third policy we want to introduce in Liberty is avoid, which means: don't deploy this workload on a host that has SMT enabled. Some workloads just prefer not to have that kind of simultaneous multi-threading underneath them.
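To make the placement controls just described a little more concrete, here is a minimal sketch of how they are consumed as flavor extra specs. This is not taken from the slides: the exact key names and accepted values shifted between releases (only the prefer thread policy existed in Kilo), so treat the spellings below as assumptions.

```
# Create a small flavor and mark its vCPUs as dedicated (pinned 1:1 to host CPUs).
nova flavor-create nfv.small auto 4096 20 4
nova flavor-key nfv.small set hw:cpu_policy=dedicated

# Thread placement relative to SMT siblings; "prefer" is the policy
# implemented at the time of this talk (key name assumed).
nova flavor-key nfv.small set hw:cpu_thread_policy=prefer

# Confine the guest's CPUs and memory to a single NUMA node so they are
# co-located on one socket.
nova flavor-key nfv.small set hw:numa_nodes=1
```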
The next feature I want to introduce is huge page support. We said previously that we had patches for this; that work has now been upstreamed, so it's in for Kilo. The reason we talk about huge pages has to do with part of the memory management subsystem in the processor: there's a component called the translation lookaside buffer, and the TLB helps with address translation. Within the TLB you've got a cache over your page translation tables, and when you use huge pages, and by huge I mean two megabytes or maybe one gigabyte in page size, the fixed number of entries you've got in that cache spans far more of your memory address space. So when you're doing address translations, which happen quite frequently with virtual machines, your likelihood of hitting an entry in the cache is improved and you get far faster address translation.

So those are the capabilities we've landed in Kilo. What I want to show now is how you pull all these things together to deploy a VNF, kind of a basic NFV template. The first thing we want to do, before we start deploying VNFs, is set up the host in a way that's more suitable for these types of workloads. The first change I have highlighted here is to the grub config: in this example I'm specifying a huge page size of one gigabyte and asking for eight of those pages. The next thing you typically want to do is isolate CPUs from the host. What I mean by that is that the host scheduler is still running its own workloads, so an effective thing to do is to leave an execution unit on each socket for the host; in my example that's CPU 0 and 4 on the first socket, so you'll see they don't show up in isolcpus. The cores that are listed in isolcpus are taken away from the host kernel scheduler.

Because we're doing this host-level configuration, it's a good idea to create an aggregate so that your scheduler can be aware of which platforms have been properly configured for an NFV deployment. In this example I'm just creating an NFV aggregate; you should add whatever hosts in your environment are configured that way to it. It's also good practice to create some kind of default aggregate, so that your other hosts, which haven't been configured in that way, don't accidentally have a high-priority NFV workload scheduled to them.

Moving into the configuration side of things within Nova, what we want to set up here is part of the SR-IOV capability that was added in Juno. A typical example is that you create a PCI alias; in this case I'm showing an Intel NIC, codenamed Niantic, via its vendor ID and product ID. We also specify the pass-through whitelist, which lets you control which PCIe devices in your system get exposed up to the Nova database. In this example I'm whitelisting by physical PCIe address; there are other options, and sometimes you do want to go to that granularity because you don't want to allocate all of your NICs for SR-IOV use. We also need to update nova.conf so that the right scheduler filters are specified: in this example the AggregateInstanceExtraSpecsFilter, the PciPassthroughFilter and the NUMATopologyFilter are going to be needed.
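A rough reconstruction of those host-side steps is sketched below. None of this is lifted verbatim from the slides: the PCI address, the device IDs (shown for an Intel 82599 "Niantic" purely as an example), the core numbering and the exact option spellings are assumptions, and several of these options were renamed in later releases.

```
# /etc/default/grub -- eight 1 GB huge pages, and isolcpus leaving CPU 0
# and 4 (one execution unit on the first socket of an 8-thread example
# host) for the host kernel.
GRUB_CMDLINE_LINUX="<existing args> default_hugepagesz=1G hugepagesz=1G hugepages=8 isolcpus=1,2,3,5,6,7"

# Group the specially prepared hosts into an aggregate the scheduler can
# target, and tag it with a property a flavor can match on.
nova aggregate-create nfv-aggregate
nova aggregate-set-metadata nfv-aggregate nfv=true
nova aggregate-add-host nfv-aggregate compute-nfv-01

# /etc/nova/nova.conf -- PCI alias and whitelist for the SR-IOV NIC, plus
# the scheduler filters mentioned above appended to the existing defaults.
pci_alias = {"vendor_id": "8086", "product_id": "10fb", "name": "niantic"}
pci_passthrough_whitelist = {"address": "0000:08:00.0", "physical_network": "physnetNFV"}
scheduler_default_filters = <existing defaults>,AggregateInstanceExtraSpecsFilter,PciPassthroughFilter,NUMATopologyFilter
```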
Within libvirt, we want to make sure that libvirt is configured to pass all of the CPU properties right through into the guest OS, because one thing guests often want to do, say if you're running a VPN service of some description or have some kind of crypto need, is get access to specialized instructions that can help with that. You need all of those CPU properties exposed up. The vcpu_pin_set here is a way of telling libvirt which particular CPUs it's allowed to do CPU pinning with, and they match the same ones we've isolated from the host, so that gets you robust isolation.

Moving more onto the network side, in this example I'm showing a VLAN configuration, so we set up the ML2 mechanism drivers with the VLAN-capable switch you've got, Open vSwitch in here, and the SR-IOV NIC switch driver. Down at the bottom you'll notice the physical network identifier we're associating with this network and the VLAN range we want to allow. In the SR-IOV config itself there's a related property for the PCIe device that you want to have allocated to your guest; we're calling out the vendor ID and product ID here. In this case we don't need an agent for this device, though others do, so the agent_required setting is false or true accordingly. And we also set the physical device mapping, again calling out the physical network identifier we specified earlier.

Once we get through that key network configuration, we move on to the flavor creation, so a typical NFV flavor. We want to specify that the CPU policy is dedicated, so that we pick up the pinning with that prefer mechanism that's been implemented. As an example, one of the CPU features I'm calling out here is AES; anything that's exposed through CPUID is available to be called out as an extra spec. We also note the NIC alias we defined earlier on and the number of virtual functions from it that you want to allocate for this flavor, and in this case we make sure we're targeting the same aggregate that we configured. Other extra specs are then on the NUMA side: the number of NUMA nodes you want, the CPU pinning onto those NUMA nodes, how many cores you're asking for, and the memory policy, where strict has been implemented so you definitely get that co-located memory. And then the page size you want allocated.

So we've got a flavor set up and everything configured; now we need to create the network. This is more of a tenant-facing thing, so we create our network and associate it with that physnetNFV physical network identifier from earlier, and create any subnets if you need them. At the bottom here I'm showing the port create command, and this is where we're pulling in that vNIC binding type of direct, which says the SR-IOV device is passed all the way through to the guest. Once you've done all that, you can go ahead and boot your instance, and you should be running on a platform nicely configured to give you very efficient, very deterministic performance.
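Pulled together, the Neutron and flavor side of that walkthrough might look roughly like the sketch below. Again, this is a reconstruction rather than the actual slide content: the physical network name, VLAN range, device IDs and extra-spec spellings are assumptions based on Kilo-era examples, and the SR-IOV options may live in a separate ml2_conf_sriov.ini depending on packaging.

```
# /etc/neutron/plugins/ml2/ml2_conf.ini -- VLAN networking with both the
# Open vSwitch and SR-IOV NIC switch mechanism drivers.
[ml2]
mechanism_drivers = openvswitch,sriovnicswitch
[ml2_type_vlan]
network_vlan_ranges = physnetNFV:100:200
[ml2_sriov]
supported_pci_vendor_devs = 8086:10ed   # vendor:product of the virtual function
agent_required = False

# Flavor extra specs: target the NFV aggregate, request one VF from the
# alias defined earlier, ask for 1 GB pages, and require the AES feature.
nova flavor-key nfv.small set \
    aggregate_instance_extra_specs:nfv=true \
    pci_passthrough:alias=niantic:1 \
    hw:mem_page_size=1048576 \
    capabilities:cpu_info:features=aes

# Tenant-facing steps: provider network, subnet, a port with a "direct"
# vNIC type so the VF is passed through, then boot against that port.
neutron net-create nfv-net --provider:network_type vlan \
    --provider:physical_network physnetNFV --provider:segmentation_id 100
neutron subnet-create nfv-net 192.168.100.0/24 --name nfv-subnet
neutron port-create nfv-net --binding:vnic_type direct --name vnf-port0
nova boot --flavor nfv.small --image vnf-image \
    --nic port-id=<vnf-port0-uuid> vnf-instance-01
```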
A couple of other notable changes have gone into Kilo. The first one I want to mention is the ML2 mechanism driver for OVS with Data Plane Development Kit (DPDK) acceleration support. This OVS mechanism driver is effectively a fork of the standard OVS mechanism driver. It's something we chose to do because this is a user-space vSwitch, the functionality is slightly different, and certainly the type of functionality that's available differs from what's in the kernel, so we thought it better to have a very clear separation between them. In the future we may be able to combine them back together again, but that depends on how OVS with DPDK and netdev support evolves. This implementation supports DVR in both VLAN and VXLAN mode. It's a path to really high performance, so please try it out.

Another extension that has gone into Kilo is the VLAN trunking API extension. This hasn't really extended the VLAN trunking capability itself, but it does give you a way of knowing, if you've requested VLAN trunking, whether or not you're getting it. It's something companies here, Cisco, Ericsson and others, are very interested in pushing; it's a key requirement for NFV deployments.

The port security disabled extension is an interesting one. It's very important for many network-type applications that are not the endpoint of the data. The default security rules are largely related to anti-spoofing, and they can stop traffic getting to a port that isn't meant to be the endpoint. We need to be able to disable that so we can route traffic into that network node, have it do its thing, and effectively become a bump-in-the-wire type device.
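For a port that should act as that kind of bump in the wire, disabling port security might look something like the line below. This is a sketch assuming the Kilo-era neutron CLI and the port-security extension; the flag spellings are assumptions, and security groups generally need to be cleared from the port before the update is accepted.

```
# Clear security groups and disable anti-spoofing/port security on the
# VNF's data port so transit traffic is not filtered (flag names assumed).
neutron port-update vnf-port0 --no-security-groups --port-security-enabled=False
```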
Again, I just want to call out that there were quite a few people contributing, coming through from OPNFV, from ETSI NFV, from the telco working group, and collaborators across all the various projects. Too many names to mention here, but it's a fantastic move towards making OpenStack a platform where we can actually deploy NFV workloads. With that, I'd like to invite Alan Kavanagh from Ericsson up.

I'm going to talk about what I call the dirty little secrets inside the OpenStack community. I'm going to give a talk around VNF deployments, the different scenarios and use cases, and what's important from an NFV perspective, basically as one of the largest VNF providers in the world: what we regard as the most important things we actually want to see out of OpenStack. So let's talk a little bit about where we've evolved from. A long, long time ago we were shipping boxes. A customer would place an order and it took several months for the delivery to get there. It then took a couple of weeks for the customer to rack and install the box, connect it to his switches, connect it to his routers, and a couple more weeks to run test cases and whatever configuration management solutions he wanted to put on board. So it was a long period of time before they actually had something like a network service node installed, ready and running in production.

So in OpenStack, with Intel, Red Hat and a couple of other community members, what we are really trying to do is speed up that process. If you can imagine what happened to the music industry over the last 20 years: people used to go into HMV or the Virgin Records store; we originally bought LP vinyls, then we started to buy cassettes, then we started to buy CDs, and then we couldn't buy them anymore, so when we want music we listen to it streamed through Spotify, or we get it on iTunes or another digital music store online. And the reason we want to do that is speed, right? I'm not willing to wait long enough to hop in my car, drive down to the store, make the purchase, come home, rip it and put it on my MP3 player; I want it right now. So this is one of the things we are working towards in OpenStack. What we are really trying to do is decouple the hardware from the software, which is really what NFV is trying to do. Of course, we still need a box.

The difference now is that the software can sit on multiple boxes, on multiple platforms, and really what NFV is trying to do is make sure those software pieces can run on industry-standard platforms like x86. So where have we come from over the last 25 years? What we used to ship was a piece of software that was customized with a customized host operating system and then customized again for a specific hardware platform. The traditional ones you've heard of over the last 30 years are MIPS, PowerPC, ASICs, DSPs. Now, I'm not saying they're all going to go away, but the tradition is that most of the applications were optimized for those specific hardware platforms, which then required us to make changes in the host operating system, like in the kernel modules, and then we made changes in the application to suit those different build environments. What NFV is really trying to adopt is to say: well, that's great, but the customized hardware build slows the delivery process for us to instantiate a VNF and get it up and running. So what we're seeing now is that companies like us are shipping software and hardware, but the software can run on multiple x86 platforms. Pretty much all of the software, the VNFs that we sell today, with the exception of probably some transparent nodes, runs on x86 hardware. What that allows us to do, by putting it inside OpenStack, is ship a VNF as a piece of software: we can use OpenStack to provision and configure that VNF, reducing the time from months to minutes.

Now, that's the ideal; we're still a little way from getting there. There are a couple of features missing in several projects, and you've heard Adrian talk about one of them. One of the important features we need, in terms of a configuration option on the Neutron API, is VLAN trunking, and the reason we need VLAN trunking is because we want to deploy a VNF with VLAN trunking enabled, and today we don't actually have that. So those are some of the gaps that Ericsson and other community members are trying to support and contribute back to the community, so we can speed up the delivery process for a VNF to be provisioned within minutes.

Another important thing I wanted to talk about today is that we also have a strong collaboration with Intel around what we call the rack scale architecture. We're going to launch this box, we talked about it at Mobile World Congress, it's called the HDS 8000, and it's Ericsson's disaggregated server fabric, where we basically have pluggable compute, networking and storage modules. The reason I'm talking about it here, when I talk about VNFs, is that this will run both VNF workloads and non-VNF workloads, so you could also run some enterprise service applications or web-based applications. What we're shipping today is basically a standard platform; there is no specific ASIC inside this box, we are using standard commodity hardware components.

So let's talk a little bit about virtualization. I think when NFV first kicked off, as Stephen mentioned, most of the focus was around hypervisors, and hypervisors are a great technology when you want to virtualize the physical infrastructure.
It allows you to get a lot more density on a specific physical server, and that's really good when you want to crank a lot of workloads onto that box. It's also really good when you have VNFs that are built on different operating systems: it allows us to run different guest OSs inside those VMs, so I could be running a VNF from vendor A on one guest OS and a VNF from vendor B on another guest OS, all on the same server. Now, there are a couple of small tricks around that, and the trick is that two of Intel's main virtualization technologies are required for it: Intel VT-x and Intel VT-d are needed so that we can slice up the physical infrastructure components and share them with all the VNFs inside separate VMs.

The latest buzz we've heard in recent years is containers. So what's the difference? With containers, you have an application that is typically built against a common operating system with common kernel modules, and that's great, absolutely great, it's fantastic. But the problem is that when you have applications that require custom libraries and custom kernel builds, containers are probably not the silver bullet solution for everything. So what you'll probably end up seeing is, for example, that we're still going to have hypervisors, they're not going to go away; we'll still have type 1 and type 2 hypervisors. But what we can do is combine containers and VMs, and then we get to do something really cool: we can make use of the isolation and density that we get from VMs, which allows us to run different operating systems. In other words, if I have an application I want to run in a container and it's built, let's say, against the Ubuntu 14.04 release, and I have another application that's not built on Ubuntu, it's built on, let's say, RHEL, then by having them inside VMs I can run both container environments inside separate VMs on the same blade. So I get to take the benefits of containers and the benefits of VMs.

Now, the question a lot of people have been asking me over the last one or two years is: Alan, which VNF deployment am I going to pick? Am I going to pick hypervisors? Am I going to pick bare metal? Am I going to pick containers? This is really confusing for us; which one is most appropriate? So let's take a look at that. Hypervisors really give us a lot of density and allow us to run an application that's built for a specific kernel and a customized operating system, so VMs are really suitable for that. If you require true dedicated isolation, hypervisors are definitely the way to go. However, if you have an application that is going to consume all of the CPU and memory on a specific blade, like, say, a load balancer or a router that's doing high-rate data plane packet processing, it would actually make sense to run that on bare metal. Or, if you have a brownfield application, a web services application or a VNF, that's built with a common kernel and a common host operating system, it makes sense to run that in containers. Now, there's one thing that bare metal gives us that the other two solutions don't.
There will probably be some regulatory requirements that require a specific application or function to be locked down on a dedicated box. And some of the questions I've had from people are: well, Alan, why don't you just put it in a VM and put it on the box? Or why don't you put it in a container and put it on the box? You can do that, but if you want to minimize your cost and reduce your capex, why not remove the need for the hypervisor? If you have OpenStack infrastructure-as-a-service APIs supporting a provisioning and orchestration management system that can do provisioning of VMs, bare metal and containers, why shouldn't I be able to choose all three? And that's really the answer: there's no silver bullet solution for one or the other. What you'll really see is that it depends on the VNF being built. In some cases it will run better, with more performance, on bare metal; in other cases, if it's not CPU or memory intensive and you want to crank out more density on the box but it's using a custom kernel, VMs are the way to go; if it's using a common host operating system and common kernel modules, containers are the way to go; or you can use a combination, as I was also suggesting, and run the containers inside the VMs.

So, just to give a quick summary of what Adrian and Stephen were talking about here: I think the main point Stephen from Red Hat was trying to emphasize is that we need to be transparent between all the communities, OpenStack, ETSI NFV, OpenDaylight would be another one, and the OPNFV program as well. We need that cross-collaboration so we can make sure we get all the right configuration options supported in OpenStack to do the provisioning of all the different deployment models. You heard Adrian talking about some of the contributions that are being upstreamed, and there is still a little bit of work ongoing, let's say, for example, to support true SR-IOV, which I think is going to be important for containers as well as for hypervisor models; and then there are things like VLAN trunking and other configuration options that are still missing, for example on the Neutron API.

One other thing I haven't really mentioned here is that today there is very little support for hardware acceleration in containers. That's another thing I think we'll probably start to address inside the community in terms of the orchestration and configuration options. One example that's really important for VNF vendors: we ship a piece of software, and what we want to make sure is that the VNF we're selling will actually perform according to the SLA. What that really means is that I want to make sure the application is going to run in a deterministic, predetermined state at runtime. So for us it's important to talk about CPU core pinning, and it's important to talk about NUMA awareness and NUMA-aware scheduling. There is some work to be done for containers to support those specific configuration options being automated in OpenStack.
And then the last one, the one I was talking about from Ericsson as well: what we see is that you're not going to have a silver bullet solution where everything is containers, or everything is bare metal, or everything is type 1 or type 2 hypervisors. It makes sense, in an ecosystem like OpenStack, if we can support all three; then we're going to be able to support all the VNF-type applications and workloads that we want to provision in an automated way. Okay, so with that, I'm going to take questions and answers, and I'm going to invite my two colleagues, Adrian and Stephen, back up. If you could use the microphone for any questions we have, and don't be shy.

So, a question for the gentleman from Intel, or anyone really. You talked about having VNF-optimized nodes and compute-optimized nodes, but it gets hard for a provider to dimension how many nodes to set aside with hardware acceleration and how many to set aside without. Do you think most people will end up just using compute VMs all over the place and not bother about hardware acceleration, to get uniformity of inventory?

There are a couple of answers to that, really. In many ways I didn't talk too much about the hardware acceleration capabilities you have in how we've laid things out here. There are options where you don't have to go to the full extent of doing, let's say, the host-level configuration where you need to specify aggregates; but if you choose not to take that type of step, then determinism and performance go down. So that's more of a deployment consideration you need to make. When it comes to things like hardware acceleration, let's say crypto or compression acceleration, those are PCIe devices that you don't have to associate with an aggregate per se; you can specify them through flavor extra specs or image-property-type requirements, and the availability of those then depends on how many you've deployed in your environment.

I also just want to tag onto that a little and explain some of the reasoning for recommending the aggregates in this situation. In the example where we're using CPU pinning and large pages in particular, if I pin one of my vCPUs on that box, I pretty much have to pin all of them, because I have to pin everything else away from the cores I'm giving that one VM. The other thing is that with huge pages you lose memory overcommit: we can't overcommit the huge pages, because that would lose the performance you wanted in the first place. So that's why, in the situation where you're using both of those features, you really do need to separate the workloads as well. Yeah, go ahead.

A minor question about CPU pinning. In each of those cases I think you were trying to give a justification for why you do that, but I didn't hear you say anything for the separate scheme.

With separate, why you might want to do that is if your application is ideally trying not to have a lot of interdependence between workloads; it's a reasonable choice. I think isolate would probably be my preferred one, though; that's where I was making recommendations.

It seemed like separate is de-optimizing, essentially non-deterministic.

Not necessarily, because you can combine separate with NUMA. What you may get in that case is that you actually pull these processes apart in terms of execution units.
The thing to remember is that two logical CPUs on the same core, when you're running in SMT mode, don't get you 2x the amount of compute resources. With separate, what you're getting is more compute capability than two sibling CPUs would give you. You may choose that simply as a way of boosting the amount of actual compute resources you get for two virtual CPUs, while not needing the full isolation that the isolate policy gives. Sorry. Any more questions? Sure.

Do you have any numbers available for when you pin, when you co-locate, what the performance benefits are for typical VNFs, roughly?

That's a your-mileage-will-vary type of question; it very much depends on the workload. What I want to say is that within OpenStack we're offering the tuning capabilities that VNF developers need to run efficiently. It's up to the VNF developers to design their applications in an intelligent way that gets them the maximum benefit.

Yeah, exactly. One of the things you'll see is that it depends on the VNF workload they're trying to provision. There might be some cases where you don't care about CPU core pinning because the application is non-deterministic. If the application is deterministic, then it does make sense to take advantage of CPU core pinning and NUMA awareness, NUMA-aware scheduling. It's really on a case-by-case basis that the answer will differ. Any more questions? If you can just take the microphone. Sorry.

Just to help answer: last week at the NFV World Congress, Francisco Javier Salmon from Telefonica published some numbers and results that bring together all the functionality we have been producing for NFV. It's all public documents; I invite you to look at the numbers published. Cool, thanks. Actually, on that point, there are a number of public demos we could point to; it's just that I don't want to give a single number for all of you. Thank you.

Hi. Having seen a lot of discussion on NFV the last few days, the other word has been "chaining", and I'm interested that there's been no discussion of it here. I'm curious whether you think the existing networking APIs are perfectly adequate in that regard; I guess I'd be surprised if you said yes. I'll let you guys answer.

Okay, so we have work to do. I think we have some work to do around that; the focus of this talk was really the deployment options that you have. Service chaining, yeah, there are a couple of different methods you can use for service chaining: VLAN IDs is one, MPLS label IDs is another. Depending on the network overlay solution a vendor is going to use for tying those, let's call them VNFs, together, some might be optimized for using VLANs and some might be more optimized for using MPLS label IDs for tagging between one hop in the chain and the next. Yes, I think there is a lot more work to be done to actually support those; it's going to take time, but we're going to get there. Specifically with service chaining, I think a lot of people also have different ideas about how it should work, and that's where the use case discussion becomes very important.
Another point too: I think this talk was very much teed up towards the NFV infrastructure side of things, the network connectivity and the rest. You could take that as far as networking for containers; I think there's definitely some work to be done to have the same sort of provisioning, orchestration and management that we have for hypervisors done for containers, for example. So there's still some further work ongoing that we need to add. Next question.

Just to follow up on the service chaining question: does it ever make sense to not have the chain on the same server? Or is it a safe enough policy to always schedule them on the same server, since they're talking to each other? It seems like the permutations and combinations to characterize are so huge that it would be a great service if somebody actually provided a set of guidelines.

Yeah, that's a really good question, and there are a couple of different answers. The obvious one is the workload, the amount of traffic you're actually trying to handle. Yes, you could put all the VNFs for that service chain on a single box if they're small enough, but if they're going to be large enough, which a lot of them actually will be, I don't think you're going to see them all sitting on the same node. What we probably need to do is add some additional work around the Nova API and the Nova scheduler, so an administrator can say: this is a small enough service chain, I don't really care about anti-affinity, provision them on the same machine. Then there will be other cases where it's big enough that what I want is some sort of affinity, let's say within the same pod, so they're all hitting the same switch, for example. So there's definitely some work we probably need to add to the Nova scheduler to take care of those specific scenarios, but I don't think it's a lot of work; it's small, minute changes that we need to contribute to handle these cases.

I think we need to pause there, because we have to give one of you a computer before we leave, but we'll probably be outside if you want to ask further questions. So thanks for coming, everybody, and I hope you enjoyed the talk. Yeah, okay. Oh, I think it's my ticket actually, no, I'm just kidding. Six two one three two two seven. Six two one three two two seven three. All right. Oh wait, we've got a late taker. Okay, let me read it again: six two one three two two seven. Yes, it is. We have a winner, congratulations. Thank you very much. Thank you.