OK, welcome everybody. We're here today to talk a little bit about OpenStack as a framework for NFV. This is almost a continuation of a presentation that we gave in Atlanta. There are going to be three presenters. First up is myself; my name is Alan Kavanagh, and I'm working at Ericsson on OpenStack and cloud technologies in general. On the right we have my colleague Adrian Hoban from Intel, and our ecosystem partner Stephen Gordon from Red Hat.

So what has really happened in the last few years with NFV? I think we all agree and understand that the NFV transformation has already started. We started from a point where vendors were baking an application for a specific hardware configuration build, and typically the reason we did that was because we wanted deterministic performance and an SLA for that specific application, for that specific box, for that specific runtime. Then NFV came along and basically forced a change in the market. The change it brought about was that the application, the piece of software that was the system (if you had a BRAS, for example: the subscriber management, the line card, the control card) was now just a piece of software, and the idea was that it could run on any physical infrastructure. What we were really doing was treating that application as a set of VM instances that could run on any physical infrastructure using virtualization layers like KVM, and we were then using OpenStack, the OpenStack API service framework, to take that application and provision it in a cloud environment and runtime system.

The next part is that we want to move a little bit further up the stack, and there are some really key points there. The main points, if you look at the rest of the slides we're going to present here, are policy, security, governance and SLA, and those are some of the challenges we're faced with and need to nail down in OpenStack. If you think about what's going to happen in the next couple of years, we're going to take an application and it's going to be provisioned through a platform as a service, and the reason it's going to be done through a PaaS is basically policy, security, and governance of the application on cloud infrastructure systems like OpenStack.

So what does it really mean to provision and configure a VNF on a cloud infrastructure system? It means you have a number of other pieces, basically configuration files and service catalogs, that all need to be tied together so we can actually provision that VNF. Another misconception is that a lot of people think a VNF is a single service, a single instance, a single application. A VNF is really a collection of applications, a collection of containers, a collection of VMs. Typically you then have configuration files, OVF for example, or TOSCA as another example, to define how that VNF should be provisioned and deployed on the cloud infrastructure, using, for example, OpenStack APIs. And typically what you then have in the service catalog is a thing called the network service descriptor.
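To make the idea of a network service descriptor a bit more concrete, here is a minimal, purely illustrative sketch of the kind of information it ties together for one VNF, written as a plain Python structure. The field names are hypothetical and are not taken from the actual TOSCA or OVF schemas; the point is only that a VNF is a collection of components plus the wiring and SLA needed to deploy them.

```python
# Hypothetical network service descriptor for a virtual BRAS.
# Field names are illustrative only, not a real TOSCA/OVF schema.
vnf_descriptor = {
    "name": "vbras",
    "components": [  # a VNF is a collection of VMs/containers, not a single instance
        {"name": "control-card", "image": "bras-control.qcow2", "count": 2},
        {"name": "line-card", "image": "bras-line.qcow2", "count": 4},
        {"name": "subscriber-mgmt", "image": "bras-subs.qcow2", "count": 1},
    ],
    "internal_networks": ["ctrl-plane", "data-plane"],  # how the pieces are tied together
    "sla": {"throughput_gbps": 20, "availability": "99.999"},
}
```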
The network service descriptor is typically used to define how those specific VMs, applications and containers should be tied together to make up the VNF as a whole. But there are a couple of problems there in OpenStack. OpenStack doesn't actually expose a lot of the specific configuration options and tuning knobs that a VNF would take for granted.

Let's take two or three classic examples. VLAN trunking: that is something we have been trying to push into Neutron for over a year, and we have so far failed inside the community to do it. The reason it's important for telco vendors and service providers alike is basically to make sure we don't have to tell OpenStack too many details of what we actually want to provision for that VNF at runtime. We just want to say, please give me a trunk port on the vSwitch. OpenStack shouldn't need to care about what's actually coming down on that Ethernet interface, and it shouldn't care about the VLAN tags that are being provisioned by the VM, by the application. So that's one specific service that we're really missing. Another one is quality of service as a service. Sean Collins from Comcast, for example, has been starting work on QoS, but we really need to get that into the core API. We can't just keep treating all of these as extensions; that really needs to be brought into the Neutron core API.

Another one that's important for a lot of telco vendors and telco applications is security, and there are different levels of security, so let's talk a little bit about what's under the hood. You've got what we call host attestation. Typically, when you bought a VNF from an incumbent vendor like Ericsson, we basically shipped you a box, an appliance. Now what we're shipping you is an image, so the time to market for you to provision that application is seconds or minutes, as opposed to weeks and months. But there are a couple of small nuances that we need to be aware of, and the real one is security. We have to guarantee, when the VNF is provisioned onto a specific host, that the specific firmware has not been tampered with. So we need to do some small checks under the hood: firmware validation for, let's say, a PCI card, host attestation for a specific host operating system, small things that we actually don't have today in OpenStack and that are very important for handling real-time subscriber traffic.

Another important part is scheduling. Most of the people here who have played around with OpenStack will know that there are many different schedulers. Nova has its own scheduler, filter and weight based; Neutron has its own scheduler; Cinder has its own scheduler. The problem with that is that when we place a job for a VNF, it's based on the compute first. What I really want to be able to do is say, based on the SLA I want to provide, provision this VNF while honoring these constraints, not just on the compute, but also on the network and on the storage. So what we need is a unified scheduler.
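Going back to the VLAN trunking example for a second, here is a purely hypothetical sketch of the level of abstraction being asked for. Neither the attribute name nor the behaviour existed in the Neutron API at the time; this only illustrates that the tenant wants to ask for a trunk port without enumerating VLAN tags.

```python
# Hypothetical "give me a trunk port" request; "trunk" is NOT a real Neutron
# port attribute here, it just illustrates the desired API shape.
trunk_port_request = {
    "port": {
        "network_id": "<parent-network-uuid>",
        "trunk": True,
        # Note: no list of VLAN tags. The guest decides what tagged traffic it
        # sends; the vSwitch should simply pass it through.
    }
}
```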
There are a couple of schedulers heading in that direction on Stackforge today, though they're not part of the core OpenStack projects. There's a project called Gantt that's being driven by Donald Dugger from Intel, doing some really good work there; it's a little bit immature, but going along the right lines. And there's another one available on Stackforge called the solver scheduler, driven by a number of vendors, including ourselves. The solver scheduler is constraint based, and what that really means is that you can do what's called fine-grained placement. So what a VNF can say is: don't just install or provision me as a VM. I'm only going to be able to give you an SLA if you can provision my VNF on a specific blade, somewhere within your data center, that has a specific PCI card with these sorts of drivers and these sorts of capabilities. We need to expose those small attribute nuances into OpenStack, and we don't have them today.

Another example is local storage. There are many different ways storage can be provisioned in a homogeneous or a heterogeneous data center; the classic ones are a central repository, directly attached, and local storage. And there are some VNFs that may have a requirement, for example, for local storage. The obvious one would be transparent caching: you want to consume, manipulate and manage the objects as close as you can to the actual cache.

There are a lot of additional features as well. CPU pinning is another important one. People who are familiar with enterprise applications know they are typically not persistent: small VM, small virtual CPU, small virtual RAM, small disk I/O, small NIC. On the VNF side, it's the opposite. They are big workloads, big jobs, a big VM, a big memory allocation, and we also want specific CPUs pinned for that specific VNF. The reason we want that is deterministic characteristics at runtime, because I want to be able to guarantee my SLA. Another one is link type. You can imagine that you're going to have many different network interface cards from many different PCI vendors: Intel, Mellanox and others, even more than we can consider today. What we want to be able to do in a data center is say: I don't really care which PCI vendor this data center provider is actually using, but if you're going to use a specific PCI card, I want to make sure that my VNF is instantiated on a PCI card that, let's say, has dual 20 gig links, for example. I don't have those capabilities today in OpenStack, and they're the small attributes that we need to contribute back for enforcing SLAs.

What this all boils down to is what we call fine-grained placement, and the reason you want fine-grained placement is that when you're certifying a VNF, say for example through the OPNFV certification program that's being driven by operators and industry VNF vendors alike, including ourselves at Ericsson, what we want is to be able to certify our VNFs on multiple hardware configuration builds and components from third-party vendors. That's really what we're trying to do. But in order to do that, we also have to build some smarts into OpenStack.
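As one concrete example of the kind of attribute that can already be pushed through today, here is a sketch of tying a PCI device requirement to a flavor with python-novaclient. It assumes a PCI alias has already been defined in nova.conf; the alias name "niantic_10g", the credentials and the endpoint are all placeholders.

```python
# Sketch: a flavor that asks for two PCI devices matching a pre-defined alias.
# The alias name, credentials and endpoint below are placeholders.
from novaclient import client as nova_client

nova = nova_client.Client("2", "admin", "secret", "admin",
                          "http://controller:5000/v2.0")

flavor = nova.flavors.create(name="vnf.dataplane", ram=16384, vcpus=8, disk=200)
flavor.set_keys({
    "pci_passthrough:alias": "niantic_10g:2",  # "alias:count" format
})
```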
We also basically have to be able to say: I will guarantee my VNF at runtime with this SLA as long as it's provisioned with these specific constraints, hence the unified scheduler.

So NFV is really simple, right? Why is OpenStack struggling to accept it? Is it really culture? Is it a mindset difference? Or is it that vendors and service providers haven't really gotten together and said: OK, I know OpenStack is very enterprise, single-endpoint, IP focused, but you know what, NFV really is simple. So what do we have to do to make it simple? Let's look at a child's-play approach.

First thing we've got to do, the obvious one, is API exposure. We have to enrich the northbound interface APIs so the VNF can actually specify through the API: please provision me on a specific device with this PCI card from this vendor, with this driver, with this firmware, as an example. And then I need to be able to do more on the networking side. It could be that I actually want to specify: please install me on a vSwitch that has the capability for VLAN trunking, as an example.

The next one is the cloud abstraction requirements. What we're not saying is that we take everything from the physical and virtual layer and expose it all up through OpenStack. We still want to keep the level of abstraction. The VNF doesn't have to tell OpenStack: please place me in exactly this pod, on exactly this rack, on exactly this CPU. That we don't care about. But what we do need is some set of attributes exposed on the northbound interface so we can make sure that the SLA is actually met.

Unified scheduling is a really important topic. We need to get away from just having a Nova scheduler and actually have a unified scheduler, so we can do job and workload placement that considers compute, networking and storage altogether before the job is actually placed at runtime. And what this all boils down to is SLA-driven placement. You can imagine that a couple of years ago, when you bought a traditional appliance and a node from a vendor, the SLA was based on that specific application and that specific environment, configuration build and operating system. We just need to apply exactly that in the cloud. So what we need is some small attributes, some small API nuances, that we need to expose, so that vendors like us and the service providers in the room can make sure that those SLAs can still be met in a cloud environment using OpenStack. And now I'm going to hand you over to Adrian.

So I want to pick up on some of the points that Alan made and start to look at some of the extensions that have been made, and are going to be made, in OpenStack, really to help drive that enhanced platform awareness up through the OpenStack infrastructure and expose at the API level the type of configuration and control capabilities that we need in order to deliver on the performance and SLA characteristics required. Before I get into that piece, I just want to introduce the Data Plane Development Kit quickly. DPDK is an open source software library; it's at dpdk.org, BSD licensed. Really, it's a collection of software libraries and drivers that help you optimize your packet processing performance.
It builds from abstraction layers at the bottom, right the way through to the poll mode driver infrastructure that helps you extract packets from your network device and get them into user space really quickly. You've got a whole set of libraries across the top that you can make use of in a typical data plane application, items such as buffer management and flow classification. And after you've classified your flows, you're going to have rings and buffers that you need to manage them in. So DPDK really helps by giving you a framework that you can create and deploy high performance packet processing apps on.

In addition to the actual software itself, there's a whole set of design paradigms that have generally come to be extensively used when you start to leverage a DPDK-style application. It gets to things that I'll point out later, like CPU pinning; we're going to talk about NUMA awareness and huge page support. All of these things have a fairly fundamental impact on the type of throughput capability you can get out of your platform, and what we really need from an OpenStack environment is the configuration capability to set them up. We don't really need to make OpenStack DPDK enabled; that's not the focus here. The focus is about getting that API in place so that we can make use of it.

I'll come back to DPDK in a moment, but I wanted to introduce another technology that's very relevant to high performance workloads. What I show on the right is a picture of a particular network device. We're looking here at SR-IOV, single root I/O virtualization. SR-IOV, which is part of the PCIe spec, really offers you the ability to partition up whatever PCIe device you've got. So you can create what are called multiple virtual functions and allocate an individual, or maybe even multiple, virtual functions all the way through to a virtual machine. You typically want to make use of things like DMA access to help improve the performance, so address translation is usually carried out in the chipset to help accelerate that. At the top, in the VM, you've got the virtual function, and you need a driver that has been specifically designed for the card, for that virtual function. There's usually some communication mechanism between the virtual function and its physical function counterpart. That's necessary because the physical function and the virtual function are very similar but different. The main reason they're different is that the physical function has complete control of the device. You would not want to be in an environment where you're sharing virtual functions, allowing multiple virtual machines to get access to a hardware device, and then have one of them bring the link state down. So you differentiate between what you're going to allow a physical function to do versus a virtual function.

So in an OpenStack context, when we want to allocate an SR-IOV capability through to a virtual machine, what we really have to do is expose up the key items that the virtual machine is going to use to find the right virtual function driver, instantiate that driver, pick it up through the PCIe subsystem, and then be able to access the device. Once that virtual function driver has been enabled, it communicates with the PF and you've got full access. Now, in OpenStack we started working on PCIe enablement, pretty much for non-network devices as the main focus, in the Havana release.
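As a rough sketch of the host-side step that partitions an SR-IOV capable NIC into virtual functions, the following assumes a physical function driver that exposes the standard sriov_numvfs sysfs attribute; the interface name is a placeholder and the exact mechanism varies by driver.

```python
# Sketch: create virtual functions on an SR-IOV capable PF via sysfs.
# Assumes a PF driver exposing sriov_numvfs; "enp3s0f0" is a placeholder.
PF_IFACE = "enp3s0f0"

def enable_vfs(num_vfs):
    path = "/sys/class/net/%s/device/sriov_numvfs" % PF_IFACE
    with open(path, "w") as f:
        f.write(str(num_vfs))  # the PF driver creates this many VFs on the device

enable_vfs(8)
```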
There were some applications where you could use it for NICs, but it was without OpenStack or Neutron being aware of what you were doing. There's been some great work since then, all the way through the Juno cycle; credit to Cisco, Mellanox, Red Hat and others who have made great strides here. What you've got now, and what I'm showing here, is a workflow for how you can actually use an SR-IOV device with Neutron.

It starts with the whole platform configuration: you need to allocate the virtual functions, which means you need to configure your physical function driver in a way that allows it to expose up the VF IDs. In Nova there's a whitelist, and in there you can specify the particular devices that you are prepared to grant permissions and access for, passed through to the Nova scheduler. You need to tell Neutron that you're going to run with SR-IOV, so to do that you have to configure the SR-IOV mechanism driver. Once the mechanism driver is configured, you then have to tell it which of all the vendor and device ID pairs in your PCIe subsystem you want to expose. Think of it as admin control, so that a data center provider doesn't have to expose everything they have on their platform and can choose which vendor ID and product ID pairs are allowed into the resource management system. Once you get to that point you're set up, you're ready to go, and it's really about which network you want to allocate a virtual function to. From there you use the neutron port-create command, and there are multiple types of virtual interface that you can set up. The example I'm showing here is "direct", which means we're going to allocate a virtual function all the way through to a VM. You get a port ID back from that command, and when you want to start up your virtual machine you provide that port ID to Nova as part of the nova boot command. Nova will check with Neutron just to make sure that the ID it's been given is accurate, and from there you start it up, and now you have a network that Neutron is aware of that has been accelerated with SR-IOV.

OK, so moving on from that, let's talk about some of the capabilities that have been added to the scheduler. Alan mentioned some extensions we need, and I'm very glad to say a lot of good extensions are going into the filter scheduler today. It needs to go past that, but first I want to address what's happening right now. From the Icehouse release there were changes made in the Nova compute libvirt driver, and in libvirt itself, that allow you to get all of the CPU-related features, as expressed through instruction sets, exposed up to the resource tracker. That allows you to create extra specs, and through those extra specs you can identify various CPU features that you want to make use of in your platform. The example I'm showing here is AES-NI, the Advanced Encryption Standard New Instructions; that's one you'd use for crypto-intensive workloads. Following on from the SR-IOV example, there's also a PCIe filter. So if you had a workload that required access to a security instruction set like AES-NI, and possibly a particular network device as well, you can start looking at these as ways to enable it. There are extensions still needed; I think we're on a path here, and I'm really happy to see we're making progress.
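The tenant-facing half of that SR-IOV workflow can be sketched with the Juno-era Python client bindings as follows. The network, image and flavor IDs and the credentials are placeholders, and the host-side whitelist and sriovnicswitch mechanism driver configuration are assumed to already be in place.

```python
# Sketch of the tenant-facing SR-IOV steps described above, using the
# python-neutronclient and python-novaclient bindings of that era.
# IDs, credentials and endpoints are placeholders.
from neutronclient.v2_0 import client as neutron_client
from novaclient import client as nova_client

neutron = neutron_client.Client(username="demo", password="secret",
                                tenant_name="demo",
                                auth_url="http://controller:5000/v2.0")
nova = nova_client.Client("2", "demo", "secret", "demo",
                          "http://controller:5000/v2.0")

# 1. Ask Neutron for a port whose VIF is a virtual function passed straight
#    through to the guest ("direct" vNIC type).
port = neutron.create_port({"port": {
    "network_id": "<net-uuid>",
    "binding:vnic_type": "direct",
}})["port"]

# 2. Boot the VM with that port; Nova checks the port with Neutron and the
#    scheduler places the instance on a host with a free matching VF.
server = nova.servers.create(name="vnf-dataplane",
                             image="<image-uuid>",
                             flavor="<flavor-id>",
                             nics=[{"port-id": port["id"]}])
```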
The "so what" of all of this is that it starts to open up some new opportunities. You can actually start to think about creating some premium flavors with this. If you're a cloud customer, you could benefit from these enhanced capabilities: you're going to get access to features in the platform that you don't traditionally get in a cloud environment. The cloud providers themselves could benefit from some potential revenue opportunities here: with this capability to create extra specs and define new flavors, you can now expose more of the value that's in the infrastructure. And back to the telco use case we're focused on today: you can really make good use of these types of capabilities to do feature matching between the needs of an application and what the infrastructure has to provide, and that can help with performance, particularly from a throughput and a determinism perspective.

Now, continuing on the theme of the enhanced platform awareness we want to enable, we need to look at one of the properties of a modern multi-socket, multi-processor system, and that's the NUMA-related configuration. NUMA is non-uniform memory access, and what that means is that the performance and latency you get to memory is very dependent on its location. If, for example, you have applications, say one on socket zero and another on socket one, and they're accessing local memory, that's working at its most efficient. But if those same applications happen to be accessing remote memory, traversing things like the QuickPath Interconnect bus and other items in the processor, you're not working at your optimal performance level. So if we put the configuration and tuning knob capabilities into OpenStack so that we can co-locate these workloads, we can start to realize the value that's in the platform and that today is, frankly, going underutilized.

If you look at one of the changes that has gone into Juno, and again it's another item where we're starting on a process and there's more to do, the NUMA topology filter allows you to specify, at an extra spec level, at a flavor level, or through image properties, what type of mapping you want between your application and your platform resources. I've already mentioned the benefits from a memory perspective, but having these applications co-located on a single socket gives you other benefits, such as cache utilization. Imagine these workloads need to communicate with each other, with lots of inter-process comms going on. Then you get huge benefits out of having them co-located and leveraging things like the shared L3 cache. So again, if we focus on getting the tuning knobs into OpenStack so that we can ask for these things, that gives us much greater performance and efficiency, and we get the memory benefits too: we keep those memory accesses local.

Next up I want to talk about another feature of the memory subsystem. Typically, when you want to do a memory access, you go through an address translation, for example from guest physical to host physical. Within modern processors you've got some hardware that helps accelerate that; the one I'm calling out here is the translation lookaside buffer.
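Going back to the NUMA topology filter mentioned a moment ago, here is a sketch of asking for that co-location through the Juno "hw:numa_nodes" flavor extra spec. Flavor sizing, names, credentials and the endpoint are placeholders.

```python
# Sketch: constrain a guest to a single NUMA node via the Juno "hw:numa_nodes"
# flavor extra spec. Sizes, names and credentials are placeholders.
from novaclient import client as nova_client

nova = nova_client.Client("2", "admin", "secret", "admin",
                          "http://controller:5000/v2.0")

numa_flavor = nova.flavors.create(name="vnf.numa1", ram=32768, vcpus=8, disk=100)
numa_flavor.set_keys({
    "hw:numa_nodes": "1",  # keep this guest's vCPUs and memory on one socket
})
```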
What happens during a memory access is that the TLB is queried, and it checks whether the particular address translation being looked for is in the cache. If that's not successful, it results in a memory read, which is a far more resource-intensive operation than a really quick TLB entry lookup. The TLB entries themselves traditionally tend to be for quite small page sizes; something like 4 KB would be a typical case. That results in somewhat limited coverage of your overall memory address space, because you've got a fixed number of TLB entries and a fixed page size, which means you can only cover a certain amount of space. One of the changes that is actually in flight right now, with patches on Intel's 01.org site and patches submitted to the community, is to enable what we call huge pages. If you change the configuration in your system so that these page table entries are now covering one gig of space each, you have much greater coverage of your memory address space. That means you have far fewer TLB misses and you go to memory far less often, so overall you get a much better benefit out of the system. Thankfully, OpenStack doesn't have to be aware of most of this; in fact, applications don't have to be aware of most of this. What you really want from OpenStack is to be able to say: we need huge pages, how many huge pages have we got in the system, do some resource tracking on that, and make sure that you can allocate huge-page-backed memory to your guest.

So we've been looking really low down at some features of the processor. If you move up a little bit and look at one of the other really important points for delivering NFV and other workloads in this environment, it's the virtual switch. The reason we call that out particularly is that this is a control point in the network: a place where you can do access control, a place where you can do bandwidth provisioning. It's a really critical control element, and we must make sure OpenStack has, again, the right configuration capabilities to set it up effectively. I'm showing an example here comparing something like standard Open vSwitch against some acceleration options: patches to Open vSwitch, and other DPDK-based or DPDK-style vSwitches. Mileage varies quite a bit on this, but you can say that between 2x and 10x is the typical range of performance improvement you can look for; it very much depends on how much you're prepared to optimize in your guest. From an OpenStack perspective, OpenStack doesn't really care. What we care about from an OpenStack perspective is that some of the key configuration items that change as a result of making the vSwitch faster now have to be configurable, and today we can't really do that. In fact there was quite a bit of work during the Juno cycle that just didn't get completed at the last minute, and it's about enabling a method for connecting between your virtual machines and your vSwitch. There are two that are in development for that: there's a userspace vhost, which is the one we developed with DPDK, but from QEMU 2.1 there is also vhost-user. They both serve the same purpose, just different implementations, and for the next release of OpenStack we're going to be working to enable them in the Kilo cycle.
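Going back to the huge page support mentioned above: at the time of this talk it was still in review, but as a sketch of the kind of knob being proposed, the flavor extra spec used by those patches looks along these lines, with placeholder sizing, credentials and endpoint.

```python
# Sketch: request 1 GB huge-page backing for guest memory via the proposed
# "hw:mem_page_size" extra spec (value in KiB). Placeholders throughout.
from novaclient import client as nova_client

nova = nova_client.Client("2", "admin", "secret", "admin",
                          "http://controller:5000/v2.0")

hp_flavor = nova.flavors.create(name="vnf.hugepages", ram=16384, vcpus=8, disk=100)
hp_flavor.set_keys({
    "hw:mem_page_size": "1048576",  # 1 GiB pages; "large" is also proposed as a value
})
```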
For now, though, for folks who want to try it out, there are patches available that demonstrate the VIF binding changes needed up in Nova so you can start with this vhost style, and down on the Neutron side there are patches against the Open vSwitch mechanism driver to make sure that we can hook the ports up to the vSwitch correctly. The combination of both of these things in OpenStack is going to help drive really high performance I/O throughput through your system.

I think that's the key thing for us: we really have to work on making sure we have the tuning knobs that you need. There's so much capability that's going underutilized today, for NFV and for enterprise and cloud workloads. A little bit more smarts at the API level, better scheduling like Alan points out, and then we can extract much greater value from the system, which helps bring down costs and improve performance and efficiency. With that I'm going to hand over to Steve.

All right, so Alan outlined the problem space, Adrian dived a little bit more into some of the technology, and I want to talk about how we've been working with the community through the Juno cycle, and how we'd like to keep working with the community through the Kilo cycle, to continue to try and meet some of the requirements of NFV within OpenStack as the virtual infrastructure manager. At a high level, and Alan covered this to some degree but I want to repeat it just to rub it in, NFV requires open, standard APIs for use when provisioning virtual network functions on the infrastructure layer. It also requires performance, determinism and reliability features which aren't necessarily in OpenStack today. And it shares this common goal of simplicity, agility and scale of implementation when deploying, so it has that in common with most OpenStack use cases and the core ideals of OpenStack as well.

Succeeding requires a few things, and the first of these would probably seem like the simplest but is in some ways the most challenging: bringing together all of these different communities to work on the same problem space. We have communication service providers, we have network equipment providers, we have OpenStack vendors, OpenStack developers, and also telecommunications industry bodies such as ETSI NFV and OPNFV. From an OpenStack development perspective, one of the challenges is that that's a lot of voices to have in a conversation. Even in the Juno cycle, from a developer's point of view in OpenStack, there's been a viewpoint that NFV users themselves don't necessarily have the ability to outline their requirements clearly, because we have these use cases, for instance the VLAN trunking Alan mentioned, where there are a couple of different ways to use that in the context of OpenStack, and illustrating those with one voice has been a challenge. That's what we really want to bring people together for: to work on those use cases and to present a clearer picture for the broader OpenStack community to consume. There is also a challenge in striking the balance between exposing the configuration knobs we've talked about, for VNFs and the applications orchestrating VNFs to use, while also abstracting them enough to keep those core elastic cloud ideals intact. One final challenge I would like to mention is that testing these things, particularly in the OpenStack integration gate, can be challenging.
The gate runs on public clouds, which don't necessarily have the hardware features we're looking to utilize in these tests, and that's where we're heavily reliant on the operator community and the network equipment provider community coming together to help with functional testing and designing functional scenarios to verify the functionality we're building.

With these things in mind, at the Atlanta summit there were a number of birds-of-a-feather sessions around NFV, and we formed at that time the NFV subteam working on OpenStack. That team, following those initial meetings, met weekly throughout the cycle using IRC, and the mission of the group, at least through this first cycle within Juno, was to identify those use cases, define requirements based on them, create specifications based on those requirements, and ultimately create patches and drive them into OpenStack to help meet the requirements of these use cases. Overarching all of this is that the team is not something mandated by the board or by the technical committee, but something that works with the existing OpenStack development teams. That means both projects like Nova, Neutron, Heat, Glance and potentially others in the future, and also efforts like the IPv6 subteam within Neutron.

In terms of what we worked on for Juno: there is improved SR-IOV support in Juno, which is what Adrian was mentioning earlier. It is not what I would call the final picture for SR-IOV support in OpenStack; there are certainly improvements that can be made. Support for multiple vNICs attached to an instance on the same network was added as well. There's the ability to evacuate a virtual machine instance to a scheduled host: what that means is that the evacuate command no longer requires the admin to specify the host that the instance has to go to, it can effectively be rescheduled. The next challenge with that, of course, is that to reschedule effectively using the same decision rules you used the first time you placed the instance, you need to persist scheduling hints, and that's not currently done. There was also a lot of work around guest CPU topology: in particular, the ability to set the number of NUMA nodes a guest can span, and also the socket, core and thread count of the virtual machine (see the sketch after this paragraph).

In terms of tentative goals for Kilo, and I say tentative because the design sessions only really start in earnest on Wednesday, where a lot of this work gets defined: leading into the design summit there's been a lot of good, robust discussion on the mailing list around addressing the VLAN trunking requirement, which is certainly welcome. There's the ability to permit unaddressed interfaces on a virtual machine; that's particularly useful for instances that need to process non-IP traffic. At the moment this is possible, but you have to give that interface an IP address even though it's not necessarily used by the application. There's the continuation of the NUMA work I mentioned: currently only vCPU topology is dealt with, but there are also memory considerations, large page considerations and I/O device locality considerations, and all of those things are required to make an optimal NUMA fit for a given instance.
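For reference, the guest CPU topology work mentioned above can be sketched with the Juno flavor extra specs like this; names, sizes, credentials and the endpoint are placeholders.

```python
# Sketch: shape the virtual CPU topology seen by the guest via Juno flavor
# extra specs. Names, sizes and credentials are placeholders.
from novaclient import client as nova_client

nova = nova_client.Client("2", "admin", "secret", "admin",
                          "http://controller:5000/v2.0")

topo_flavor = nova.flavors.create(name="vnf.topology", ram=8192, vcpus=8, disk=80)
topo_flavor.set_keys({
    "hw:numa_nodes": "2",    # let the guest span two NUMA nodes
    "hw:cpu_sockets": "2",   # expose 2 sockets x 4 cores x 1 thread to the guest
    "hw:cpu_cores": "4",
    "hw:cpu_threads": "1",
})
```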
And there's vCPU pinning. This is a good example of the abstraction I talked about: meeting the elastic cloud goals while still providing the functionality for NFV. The idea of this specification, as it currently stands, is that you would mark this resource or this instance as dedicated, and that means that, implicitly, the scheduler has to find a host to fit that instance on and give it CPU pinning and dedicated access to those CPUs. As an enabler, particularly for accelerated vSwitches like the DPDK-based ones, the ability to use the userspace vhost functionality is also a goal. There's the continuation of the work around the scheduler: abstraction of the scheduler interfaces, the potential split-out of the scheduler, and certainly making the scheduler more pluggable so that something like the solver scheduler could be used instead; continuation of that work is also a goal. And finally, configurable MTUs and port mirroring, or tap-as-a-service as it's sometimes referred to.

So how can you get involved in this? The NFV subteam was certainly the effort that we continued through the Juno cycle. There has been some expressed interest from more providers in getting involved in that space, and there are other efforts like OPNFV that we're trying to bring together as well. For those who are interested in joining in and finding out how they can help, there is an operators summit session on Thursday morning at nine o'clock in the Hyatt Hotel where we'll be discussing these things. I think we have a fairly long slot, something like 80 minutes, to hash out some of these things, revisit how we did in Juno, see how we can do better again in Kilo, and keep improving things for this use case.

So to summarize, before we get the gang back together up here for some questions, hopefully: network transformation is happening now. There are incremental requirements for OpenStack APIs, so this isn't necessarily a big bang where we turn up at the next summit and say we're finished, but there are a lot of things we can do in each cycle, by working together, to propel these goals. In particular, we need additional attributes for service exposure; policy and SLA-based provisioning for the application and for the orchestrating application as well; fine-grained placement; and policy control and enforcement, again particularly important for these applications as we deploy, provision and manage them on our clouds. We need unified scheduling across compute, network and storage resources, which certainly seems like something that perhaps comes out of the scheduler breakout, perhaps something else, but it is a space we need to address. We need enhanced platform awareness for performance and determinism: those requirements we've been talking about, but also, tying that back to the earlier items, having enough knowledge in the platform at some point that the SLA defines whether I enable CPU pinning, whether I enable large pages, and so on. And finally, we need to provide those additional tuning knobs that VNFs need to operate, or to define their best operating characteristics, while still maintaining that elastic cloud goal. For all of this to be achieved, again, I want to reiterate that we do need to bring together all of the disparate voices involved in this space and try to make these improvements from within OpenStack together, rather than pulling in different directions as we certainly found before we got together at the Juno summit in Atlanta; we want to give one voice to these features and define these requirements in common.
And with that, I'll ask the guys to come back and join me for question time. If people want to ask something, please use the microphones placed around the room. So now it's nearly booth-crawl time, beer time, so that's just five minutes of Q&A before we all go and have some beers. So, questions? Thanks everybody for coming. Hope to see you all in a minute. Thanks. Thanks.