Okay, good morning everybody. I'm going to tell you about distributed NFV and the challenges it poses for OpenStack. Apologies for the croaky voice; I spent last evening shouting too much.

So, what is network functions virtualisation? Those of you familiar with networks will recognise the situation on the left-hand side of this diagram, which is how we run networks today: you have an appliance and it does one thing, say a firewall. If you want another function, say a media gateway, you have to install another appliance, and it can't be anything else. There is a lot of logistics involved in installing those boxes. If you are deploying across a global network, you could easily plan for a year or two just to get that sort of equipment installed around the globe, and there are lots of issues with that. So what we have said with network functions virtualisation is: why do we need to do it that way? Why do we need to install all this physical kit? Why don't we just run those functions as software on standard servers, as on the right-hand side of the diagram, which is something you will all be familiar with? I don't have to explain the advantages of that; it is essentially the cloud model. All the vendors' boxes disappear and the functions run as software on the right-hand side. To the IT folks in here that will sound like no big deal, but to the network folks this is a very big deal; it changes the way we deploy things in networks very significantly.

I compare running networks today, installing physical appliances, to playing a game of Tetris: you have boxes of all different shapes and sizes and you are continuously trying to stack them into your network. With the cloud model it is much more uniform, and much easier to accommodate growth and to flex as demands change. The bit that is important for OpenStack is where you see those spinning cogs: the automated installation of that software into the network is where OpenStack has a role to play. And in case you are wondering what I mean at the bottom of the slide, I am comparing networks today to Victorian machinery; where we have got to with NFV today is a bit like the PC with the green screen; and where we are trying to get to in the future is the advanced holographic display that lets you select the functions you want in your network instantaneously.

So, why use OpenStack for NFV? You can see the architecture diagram on the left-hand side. I won't go into all the details, but the bottom left-hand corner should be very familiar to you. We call that the NFV infrastructure, but to you it is just infrastructure, with the usual storage, compute and networking. On the right-hand side we have the management components: the virtualised infrastructure manager, the VIM, which from our point of view is essentially what OpenStack is. You can use other OpenStack components in the VNF managers and the orchestrator as well, but the VIM is the area we concentrate on. When we came up with this architecture, which is now about three years old, it was always assumed, when we drew that diagram with a VIM in it, that OpenStack would be one of the candidates. We didn't want to reinvent OpenStack;
we really wanted to reuse as much as possible and not go out and invent new stuff. But the big question is: are the differences between NFV and cloud too large a gap for OpenStack to bridge? There has been a lot of debate about that; this talk brings up some of the issues in that area, and other people have been debating them this week as well. The key thing is to be aware of the use cases you have in mind for NFV and whether OpenStack applies to them.

So, distributed NFV. Let me tell you a bit about it. Unfortunately it is known by various names, which is an indication of how early and immature this technology is. Some people call it virtual enterprise CPE, and that is what I tend to call it in BT, because we are virtualising the CPE that belongs to the enterprise customers. Some people also call it the universal CPE, or uCPE, because it is a server you put on the customer premises on which you run virtual appliances: things like firewalls, routers, gateway functions, WAN accelerators and security functions. So distributed NFV means we are putting the NFV functions right out at the edge of the network.

Let me give you a feel for the numbers. If we were putting things in the core of the network (and I will talk about the UK network, which is a relatively small network in global terms) you might have 10 or 20 core locations where you would typically put data centres. If we are talking about the edge of the network, that could be anywhere from hundreds to a few thousand locations, depending on how you define the edge. In terms of customers, it depends what you call a customer. If we are just talking about multinational corporations, a typical multinational might have 1,000 branches around the globe, or 1,000 branches in the UK, so we might be talking about 100,000 sites. Each site has a server on it, so each site is a compute node, and we are talking about a minimum of the order of 100,000 compute nodes. If we are more ambitious with this sort of solution, we could extend it into the SMB market, the small and medium enterprises, and then you are talking in the order of low millions. And if you want to be even more ambitious and extend it to the residential market (people have been talking about using OpenStack for IoT) you are obviously going into the multi-millions. So the solution has to scale tremendously as you go towards the edge of the network.

Your next question is: what sort of functions do you really need to run at the edge of the network? I list some functions here, and this applies to the enterprise market: there are functions that have to be implemented out at the edge of the network.

So here is another view of the deployment options. The top diagram shows where we deploy network functions in a service chain, where you couple together the things you need to make a complete end-to-end service. In the top row everything is delivered from the cloud, and you can probably all get your heads around how you would do that. In the next one down, everything is delivered purely as distributed NFV: the virtual CPE running on a server at the edge. You might say, why would you want to do that? Why isn't it OK to run everything in the cloud? Let me give you some examples. You might not have a data centre, a provider edge or any infrastructure in the country where you want to deliver services to your customers.
In that case the only location you can put your service is on the customer premises. And the customer premises can vary. It might be a branch site running a very low-end server; it might even be Atom-powered, or eventually ARM-powered. It might be quite a big branch, an HQ, or even a customer's data centre, in which case you have a full dual-socket, Xeon-class server there. The other thing to bear in mind is what the network infrastructure looks like. In BT's case a good example is BT Brazil: we have 40,000 customers in Brazil on the end of 2 megabit per second satellite links. If you are on the end of a 2 megabit per second satellite link, you don't want to put all your compute out in the cloud, because the delays mean everything just takes too long; you need some local compute. That is the idea of putting a server on the customer premises to deliver the service: the compute is on the customer premises, but we are managing it remotely. The bottom row shows a hybrid, mixed model, and this is what is likely to happen in practice: some functions implemented on the customer premises and some functions implemented in the cloud, and there may even be functions distributed across both in a cloud-like way, where the customer premises effectively becomes part of the cloud.

This slide shows the architectural components. On the right-hand side is what we do at the data centre end, and on the left-hand side we have two choices. One is the hybrid deployment, where we put a server on the customer premises, and you can see the bits and pieces in there that you would expect, including the hypervisor. The other is where we put everything into the cloud and there is no compute on the customer premises, which is the box on the far left of the diagram. When you are running things in the data centre, OpenStack is fully appropriate, and that is the scenario you have seen Verizon and AT&T talking about recently, so that is great. But the issue with distributed NFV, and the focus of this discussion, is when you are running it on the customer premises. It may be a single server, maybe a small server, and we are talking about potentially hundreds of thousands or more compute nodes to manage. That leads to these six challenges. I am going to hand over to my colleagues now; I have three colleagues coming up, each of whom will talk through two of these, and then I will come back and summarise at the end. Adrian, thank you.

Thank you, Peter. The first of the challenges Peter has articulated is the issue of how you bind the network devices within the guest, within the VM, to the virtual NICs that the infrastructure provides. The reason this is a particular challenge: think of a firewall deployment as an example. It is really important that the firewall understands which of its interfaces is the LAN connectivity and which is the WAN connectivity, and as you can imagine it is pretty bad to mix those up. One of the things we looked at first that could help is consistent network device naming in Linux. This is a set of conventions within Linux that helps you understand which type of device you are connected to. If it is a LAN-on-motherboard port, it is "em" followed by a number, em1 for example. If it is a PCIe device, it is "p" followed by the slot number, then "p" followed by the port number of that device, p3p1 for example.
With virtual functions that extends further: you get an underscore followed by the virtual function number, p3p1_0 for example. But we are not recommending that you rely on this method to get a consistent allocation of your NICs, because the administrator can change the naming (Linux explicitly gives them that capability), and it also has to be specified on the kernel boot command line. While it is the default, it would be pretty bad for us to rely on that kind of default in OpenStack.

So, looking at how we want to address this challenge, the first option is the Nova boot option with metadata. For VNFs that support reading the configuration drive and its metadata as they boot, the tenant, or whoever is deploying the service, can tell the VNF what the device labels are. You go through the typical process of instantiating the machine: you create the port, and coming back from that you get the MAC address. What we can do with this capability is specify, in the nova boot command, the device label that is associated with that desired MAC address. Then, when the VM boots up, it can check the config drive and do the right mapping, so it knows predictably what type of interface it has.

The second option is a new capability planned in Nova called virtual guest device role tagging. The benefit here is that, even when you are running the same type of VNFs, the ones that can access config drive information, we no longer need to go and figure out what the MAC address is. With this proposal it is sufficient to add to the nova boot command a reference to the particular device label you want (the device tag, as we call it in this case) and associate it with the network you want the device to come up on. Nova will then create the mapping and figure out what the MAC address should be to reflect the desired binding, and when the VM boots up it can read the config drive and figure out which MAC address corresponds to which role.

But what about the case where the VNF cannot read this type of config drive information and has a very static view of how it wants its devices allocated? In that case you first have to go and look at the device and understand which PCIe addresses it expects the LAN and WAN interfaces to show up on. The proposal here is to make an extension in Glance around the image metadata, so you can tag the image with a particular device capability, a WAN or a LAN say, together with the PCIe address in the guest that the virtual machine wants it booted on. Then we leverage the capability I just mentioned: you go through the boot process and Nova creates the mapping to make sure it uses the PCIe address the guest really wants. So when the guest completes the boot process, the LAN or WAN or management interface, whichever one you are looking at, definitely shows up in the right location. That gives us the predictability that is really necessary in this case.
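To make the first two options a bit more concrete, here is a minimal sketch of what the workflow could look like from the CLI. It is illustrative only: the network, image and flavour names and the wan_mac metadata key are hypothetical, and the tag syntax shown for device role tagging reflects the planned proposal rather than a released feature.

```bash
# Option 1 (possible today, sketch only): create the WAN port first so its
# MAC address is known, then pass a label-to-MAC hint to the guest via the
# config drive. "wan_mac" is a convention this particular VNF would have to
# understand; it is not a standard OpenStack property.
neutron port-create --name vfw-01-wan wan-net      # note the id and mac_address
nova boot vfw-01 \
  --flavor m1.small --image vfw-image \
  --nic port-id=<wan-port-id> --nic net-name=lan-net \
  --config-drive true \
  --meta wan_mac=<mac-address-of-wan-port>

# Option 2 (planned virtual guest device role tagging, syntax assumed from
# the proposal): tag each NIC with a role and let Nova expose the
# role-to-MAC mapping in the device metadata on the config drive.
# nova boot vfw-01 --flavor m1.small --image vfw-image \
#   --nic net-name=wan-net,tag=wan --nic net-name=lan-net,tag=lan \
#   --config-drive true

# Option 3 (proposed Glance image-metadata extension for VNFs with fixed
# PCIe expectations): the property name below is purely hypothetical.
# glance image-update <image-id> --property hw_wan_pci_address=0000:00:04.0
```

The common thread in all three variants is that the binding is made explicit at boot time rather than inferred from interface ordering inside the guest.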
A related part of the binding problem is what happens when a NIC goes down. Three behaviours have been observed in these environments. The device can show up as the same device again, which is usually okay. It can be re-enumerated as a new device, something like eth3 in this picture, in which case the VNF may not know what to do with it. Or you can end up in a case where your network function just locks up. When we looked at why some of these things happen, it feeds into what we will talk about next: it comes down to the data model we used when deploying the service. There is an option to deploy by chaining Neutron networks together to create the service chain, but when you do that and then want to make a modification, you get a lot of these connect and disconnect events. The short-term solution is to move to an SFC approach, which I will talk about in a moment. Longer term, and probably the most correct path, is that the VNF vendors need to properly handle these types of events in a cloud environment.

Looking more at service chain modification: part of what was set up was this idea that, if you have multiple functions (and this was before we had networking-sfc), you could use Neutron networks to chain them all together. It is quite a static way of setting things up, but if you wanted to use a Neutron interface, that was really the only option we had at the time. If you do want to make a change in that model, that is where you get all of these connect and disconnect events, and if you want to make a significant change you can end up with a network outage of potentially more than five minutes, which is terrible from a service delivery perspective.

So we started to look at all the great work happening in the networking-sfc community; there are lots of talks and design sessions on it this week. Networking-sfc is a port-group based method for creating a service path between your various network functions. What I am showing here, too, is that the way it has been architected you can have lots of different backends and lots of ways of interfacing with your network; the one I am showing is Neutron managing OVS directly, which you can do with networking-sfc. It is a relatively easy interface to consume, and it works quite reliably and predictably for this type of use case. There is a lot of goodness in what it is today, but when you look towards the IETF and the type of things that show up in the SFC specifications there, we identified a number of changes we would like to make to SFC for Neutron. So there is a proposal on the table now to look at how we might move that forward and do a second revision of the API; if you would like to follow it, the mailing list is shown here. We want to address features such as being able to reclassify traffic at different points in your chain, being able to carry metadata, and different ways of doing encapsulation depending on the wire protocol you want to support. The idea is that we work with this community, possibly move to a second revision of the API, or possibly take some of the ideas we are putting forward and see how we can modify the existing API, which could itself mean a step change.
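For reference, here is a minimal sketch of how a two-function chain is built with the current networking-sfc CLI extension, which is the API the proposed revision would evolve. The names, the Neutron ports and the classifier subnet are all hypothetical placeholders.

```bash
# Sketch only: each VNF is represented as a port pair (its ingress and
# egress Neutron ports), port pairs are collected into groups, and a port
# chain ties the groups to a flow classifier.
neutron port-pair-create --ingress vfw-in-port --egress vfw-out-port pp-firewall
neutron port-pair-create --ingress wanopt-in-port --egress wanopt-out-port pp-wanopt

neutron port-pair-group-create --port-pair pp-firewall ppg-firewall
neutron port-pair-group-create --port-pair pp-wanopt ppg-wanopt

neutron flow-classifier-create --source-ip-prefix 10.0.0.0/24 fc-branch-lan

neutron port-chain-create \
  --port-pair-group ppg-firewall --port-pair-group ppg-wanopt \
  --flow-classifier fc-branch-lan \
  pc-branch-service
```

Because the chain is a first-class resource, modifying the service path then becomes an update to these chain resources rather than detaching and re-attaching Neutron networks, which is what triggers the storm of connect and disconnect events described above.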
But what we are really looking for here, too, is to say: please work with us, engage with us, and of all these extra features we think might be necessary, help us understand which ones are the most important for you right now so we can prioritise that work. With that, I am going to hand over to Tarak from HP Enterprise.

Thank you, Adrian. One of the other problems that was articulated to us is that, as Peter talked about earlier, a lot of the distributed components of this CPE deployment are at customer sites. When they are at customer sites, in some cases they may be connected directly to the provider's fabric, but in a number of other cases they will be connecting over the internet or over someone else's fabric. In those cases you have to make sure that the communication between the customer edge and the next hop, wherever it may be, all the way to the cloud or perhaps to the local PoP, is protected. So this is one of those areas where you have some issues. Since we are talking about a physical device with a host operating system running on it, one of the shorter-term options is to use some kind of encrypted tunnel. Most of the time it is the carrier who ships the CPE device, either directly or through partners, so you put one of the encryption endpoints on the CPE device, alongside the host operating system, and you create a tunnel back. IPsec is quite likely the most portable and easiest to use, but other options are absolutely available. For the longer term we need to look at, in addition to having a tunnel, how we can minimise the type of traffic that travels between the CPE and the cloud. In the next couple of slides we will go a little deeper into a federated deployment where you don't have to open those 500-odd pinholes to communicate with and manage the remote devices.
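As a concrete illustration of that short-term tunnelling option, here is a minimal sketch of an IPsec connection configured on the CPE host operating system with strongSwan. All addresses, subnets, hostnames and certificate names are hypothetical placeholders, and a real deployment would also need a considered approach to certificate management and to isolating the management plane from the internet.

```bash
# Sketch only: a strongSwan connection on the CPE host that carries
# management and OpenStack traffic back to the operator's VPN gateway.
cat >> /etc/ipsec.conf <<'EOF'
conn cpe-mgmt
    keyexchange=ikev2
    authby=pubkey                 # per-device certificate installed before shipping
    left=%defaultroute            # CPE WAN address
    leftcert=cpe-0001-cert.pem    # hypothetical device certificate
    leftsubnet=192.168.100.0/24   # local management/compute subnet (placeholder)
    right=vpn.operator.example    # operator VPN concentrator (placeholder)
    rightsubnet=10.10.0.0/16      # cloud-side management networks (placeholder)
    ike=aes256-sha256-modp2048
    esp=aes256-sha256
    auto=start
EOF
ipsec restart
```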
The fourth challenge is the scalability of OpenStack. Over this summit, and if you look at the equivalent sessions at every summit, we find that a single set of controllers is supporting more and more compute nodes. For production deployments we are getting very close to a thousand compute nodes, and in test, yes, we have been able to go beyond a thousand as well. But even numbers like that do not meet some of the requirements we set out earlier, so we are looking at what options could be available. As we are all aware, when OpenStack started it was trying to solve the problem of providing infrastructure as a service, moving up the layers, but doing it in the cloud. So perhaps, in the short term, it is simply not appropriate to use OpenStack to manage the edge compute device. Essentially you are then delegating a lot of the features that OpenStack provides to some kind of orchestrator. It could be the orchestrator we talk about within the ETSI architecture, the NFV orchestrator; it could be a VNF manager; or it could be yet another new thing, something one of the earlier slides called a CPE manager, perhaps combining some of the capabilities of a VNF manager but focused on the edge use case. Obviously, one of the disadvantages is that we are talking about another new thing, and another new thing always creates the problems of which interfaces we are going to use and how we are going to work with the components on the other side.

The slightly longer, medium-term options involve trying to use OpenStack on the edge. One way of doing that is an all-in-one or hybrid model where you have a very lightweight control plane sitting on the CPE device itself. You could do that with one of the OpenStack projects, something like OpenStack Kolla, which enables the deployment of OpenStack services as containers. You could leverage something like Kolla to deploy just the core services, Nova, Neutron and of course Keystone, just the basic services on the edge device, and then have the same CPE device be the compute node as well. You are then able to deploy a few VNFs on it, and to work with the VNF vendors on what kind of lightweight VNFs can be deployed, perhaps making those available as containers too, as lightweight as possible. This is doable with the efforts going on today; it is just a matter of careful design and architecture, plus some of the operational issues around keeping track of and managing the different OpenStack instances you are going to have, which over time will be on different flavours and different versions.

The longer-term option seems to be a solution that I believe was initiated a couple of years ago as the OpenStack cascading solution, which didn't get much traction at first but became a project in the last couple of years called OpenStack Tricircle. We feel this is the more appropriate and more elegant way to address this. With Tricircle you have a top OpenStack, with the additional Tricircle components deployed on it, that provides a proxy for all the local OpenStacks running at each edge device. The elegance of the solution is that at the top you are still using the OpenStack API, but now you have a single API endpoint instead of your orchestrator going out to thousands or tens of thousands of endpoints; and underneath, it uses the standard OpenStack APIs to communicate with the bottom OpenStacks. That helps us work through the scale issue we talked about earlier, and the security issue as well, because what goes down to the edge is standard OpenStack REST API calls. Of course, Tricircle is not as active as we would like it to be, so I would encourage participation: we at HPE are absolutely looking to increase our participation, and we are looking for more community participation too.
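To give a feel for the all-in-one option, here is a minimal sketch of trimming a Kolla deployment down to a small set of core services on a single node. The exact set of enable_* flags, their defaults and the inventory path vary between releases, so treat the values below as illustrative assumptions rather than a tested edge profile.

```bash
# Sketch only: relevant lines from /etc/kolla/globals.yml for a minimal
# all-in-one control plane (flag names vary slightly between releases):
#   enable_horizon: "no"
#   enable_cinder: "no"
#   enable_heat: "no"
#   enable_swift: "no"

# Then deploy the containerised services on this single host using the
# bundled all-in-one inventory (the path shown is typical, not guaranteed).
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one deploy
```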
For the next challenge I am going to pass over to my colleague Arun Thulasi.

Thank you. Good morning folks. Challenge five is what we call start-up storms. Over the last few years there has been a significant increase in the number of mobile users and in the ways people want to access information, but most importantly the expectations around quality of service have also skyrocketed. If I cannot post a picture of the salad I just had, which probably a million other people had just now, onto my social network within the next fifteen seconds, it quickly becomes "my service provider sucks", hashtag. That problem gets exposed specifically when you run into a start-up stampede. When a PoP or a data centre loses connectivity, it impacts a large number of users in that region, and when the services start to come back up we do not hesitate: we refresh and retry our requests again and again. So the load on the infrastructure increases just as the patience level of the users decreases, and it is very important for a service provider to be able to address these situations so that the kind of quality of service we have come to expect can always be met. Today, when infrastructure goes down and then comes back up, the VNFs need to come back up, they need to talk to each other and reconcile their state, and the OpenStack services need to come back up and work through their own consistency requirements.

There are things we can do today, short-term options to mitigate this problem. The first is dynamic scaling of control instances: as the load starts increasing in the environment, your orchestrator, or whatever MANO tooling you have, should be able to dynamically scale your control instances as required. It could be for the VNFs, it could be for the OpenStack services; in essence, you need something monitoring the environment that can respond to the surge in workload. Secondly, take advantage of what your networking protocols inherently give you. For instance, if you use IPv6 with neighbour discovery, you can work around the ARP storm problem, which is one of the scenarios that could happen here. For the longer term, you want the impact to be felt only at the local level and not across the entire data centre. We talked about projects like Tricircle, which have a similar goal in mind: localising critical services, specifically Nova and Neutron, and pushing them to the edge so the impact is not felt all the way back to the central data centre. Going forward, and not just for Nova and Neutron, we should be able to scale OpenStack, every microservice within OpenStack, down into every region as best we can. And lastly, because the VNFs are the ones that provide the actual service to the user, we need to figure out a way to run a hybrid vCPE deployment where not all your VNFs run in the data centre: some of the critical VNFs, a WAN optimiser for example, or any VNF that would be impacted by this loss of service, also run locally, keeping the storm local to the data centre that has failed.

When we first discussed this slide, one of the questions was: why is there a bus on it, what is the point it is trying to make? It is from the movie Speed, actually. You need to drive a bus at more than fifty miles an hour, you are on an isolated freeway, and then you see a huge gap, and that is the exact feeling you have when you deploy your VNF and then OpenStack says "this API is not supported". VNFs do not have the kind of development cycles that OpenStack has; they cannot refresh their code every six months to meet the API changes that OpenStack, or any of the other fast-growing open source communities, introduces. We have to find a way to bridge that gap, for VNFs not to feel the pain: to have the ability to use the latest and greatest features, security features or whatever matters more to them, without losing the compatibility they already have with an existing version. To get there, there are some short-term solutions and a few longer-term ones. First, as a community, we need to be able to backport critical and domain-specific features where our users need them.
It is great for us to move from, you know, Liberty to Mitaka and Mitaka to Newton and beyond, but that does not carry the same value for a VNF vendor who has already certified their code on an existing version. So if it is a critical feature and a good portion of the user base is on an older release, we need to ensure it gets backported into that release. Today, within HP Enterprise, we have this challenge as part of our partner solutions, where there are multiple versions of OpenStack we need to validate against. We work around it by using a variety of remote storage options: our compute nodes are essentially shells that boot from a different storage volume depending on the version of the controller currently under test. It is a circuitous solution to this problem, but it is seemingly working until we can get to a stable state. And lastly, as good as the advance notices are that say "this version is going to be deprecated", as developers we keep chugging along on our own timelines. So there has to be a feature or facility within the code that activates a deprecation countdown timer and announces to the world that something is going to change, and it has to start much earlier in time. A longer-term option could be a safe-harbour release. This is what anyone who has built a product is probably already doing: we provide the customer with a long-term-support, or safe-harbour, release of OpenStack that we assure will have all the critical features. That keeps the engine running for innovation, where every new release introduces a number of features, while a customer who has made a conscious decision to move to a long-term-supported release still gets all the key features they require, with backward compatibility ensured. And the last idea is the ability to provide a cloud portability kit that can act as an overall wrapper around some of the APIs that we provide. With that, I will pass it back to Peter.
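As a small aside, and not something the speakers proposed, one practical way an orchestrator or VNF manager can cope with a mix of OpenStack releases is to discover what each endpoint actually supports before using it; the Nova API root, for example, advertises its supported microversion range. The controller hostname below is a placeholder, and the values returned vary by release.

```bash
# Sketch only: query the Nova API root to see which API versions and
# microversions this particular controller supports.
curl -s http://controller.example:8774/ | python -m json.tool
# Look for fields such as "version" and "min_version" in the response.
```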
So let's summarise the issues and solutions. Binding network interface cards to the virtual network function: very important, and getting it wrong has very serious consequences. You can do it today, obviously, but I would call it an error-prone approach, because it depends on writing your Heat script in exactly the right way; if there is some inconsistency in the virtual network function, you might end up connecting the wrong interface. Those coloured balls are my concern-ometers, and I have rated that one down from green, so I am concerned about it today; in the longer term we can see that some of the mechanisms proposed might turn it back to a green status. Securing OpenStack over the internet: definitely something to be concerned about today. Even if you think it is easy to implement IPsec, you have to be careful how it is implemented and think about how you are isolating all the components correctly from the internet, and even if you are just using IPsec you have to think about how you manage your digital certificates and so on. These are not things you give to an intern; you design that sort of solution very carefully. But with some of the solutions proposed, again, we can see a way of making OpenStack more secure over the internet.

Service chain modification is very important to us. The whole point of NFV is that you have the flexibility to change your service in real time with minimum downtime, and SFC is essential to that, but there are still concerns about how it is implemented today: it needs a bit more management capability and a bit more flexibility, although there is a way forward we can see. Scalability of the controllers is a very essential architectural question: how many controllers do I need to install, how do I regionalise them, do I do it per customer, and so on. When I want to manage hundreds of thousands of nodes I need really good scalability; I cannot have fifty per cent of my infrastructure just running OpenStack itself, it has to be efficient. But there are some good solutions we can see, Kolla and Tricircle, and I think those would take us to a really good point if they are carried through. The start-up storms, or stampedes: lots of concerns. You can do things to mitigate them today, but I still have some underlying concerns. This is the experience we get in networks all the time with network equipment: stampedes happen, and you do need control mechanisms around them, and I am not yet clear that we have a roadmap that takes us to a solution guaranteed to recover under all circumstances. Backwards compatibility: let me be clear why I need backwards compatibility. I am talking about thousands of different customers, all with different planned engineering works. I cannot get all those customers aligned so that we upgrade them all in the same hour of the year; different sorts of customers in different segments want different maintenance windows, some in the middle of the night, and for some the middle of the night is their busiest time. So we have to be able to run different releases of the same service at the same time, and you just cannot do that today. It may be an architectural issue, and you might find some clever ways around it with the architecture, but it is definitely a concern today, because if we cannot upgrade our customers it is a real issue: we have to be able to upgrade them, and to do it smoothly. Today, with traditional network equipment, that takes up a lot of operational time, and that is with some of the vendors guaranteeing backwards compatibility; without backwards compatibility it becomes a nightmare.

So what is the conclusion and the call for action? We have seen several tractable and competing solutions at various stages of maturity. We are making progress, but it is not really quick enough: network operators are launching these services right now, BT has, AT&T has, Verizon has, so we have to move quicker. I am happy having a limited number of competing solutions; that is not a big deal, the best solutions will win at the end of the day. But we need something very specific to happen, so here is the specific call for action. Kolla and Tricircle would be really useful in giving us the scalability, so those are essential to have, and SFC, service function chaining, is essential to have as well, so those are the developments we really want to drive. We have to engage with OpenStack and with OPNFV, the community that is generating open source solutions specifically for NFV, to make these challenges more mainstream and make sure they get addressed. We are going to need continued operator engagement, and that is why I am here, engaging with you, because I know vendors cannot solve this alone. And finally, I have listed six challenges here, and you may have heard some other challenges from Telefónica, but there is bound to be something we have missed. So if you think we have missed something, let us know, we will get in touch, and we will add it to the list of things we need to fix. So thank you, everybody.
I think we have about three minutes for questions, so colleagues, come up just in case. Are there any questions?

This is Dong Hui from China Mobile, and I have a question about what the solution is here. Are you also virtualising the connections, I mean the P routers? When you do the VPN connections, are you skipping that entirely? Because when we do this kind of solution we need end-to-end service orchestration, and for that purpose we need both an end-to-end orchestrator and an NFV orchestrator. So I am asking, are you considering that?

Yes. Our orchestrator will orchestrate the physical and the virtual elements on an end-to-end basis, so that is how we are considering it. But how you configure the P routers or the physical elements is another question, separate from how you manage the virtual CPE.

Right. When we do the legacy physical configuration we need to connect to the EMS, I mean the legacy solutions, and we connect the legacy side to the end-to-end orchestrator rather than the NFV orchestrator; if you can come tomorrow, I can present how we do that. I have a second question about the service chain. I saw you have a service chain on the edge side, and probably you also have one on the other side. How can you combine the two chains, one on the edge and the other in the data centre? How do you chain them together? Are you using two separate ways of managing them?

That is a very interesting topic; I am also presenting tomorrow.

I am wondering if we are not thinking about this more holistically, because it seems like you are assuming that I, as a customer, and I am with American Airlines, do not need other functionality at those edge locations. In theory, if we start moving towards VDI-type functionality, I need a lot more things out there, and I am not sure whether your CDN means content distribution network or something else. We need things like file synchronisation and file caching, and we need the ability to talk to SaaS providers, so there is a lot of functionality we need. So the idea of having just these little micro instances: no. Unless it is a really small airport or a really small reservations office, I am actually looking at at least two beefy boxes with Intel CPUs, and as Intel CPUs keep improving those are going to be really beefy boxes with tons of functionality and solid-state storage. There are tons of things we can actually do at the edge, and I am worried that you are not thinking big enough about what the value proposition of NFV is to enterprises like American Airlines, because it looks like you are still trying to solve just the networks portion of the world and ignoring us server, mid-range and desktop folks.

That is a very good comment, and I have heard it many times. I am trying to walk before I run, and I definitely see it evolving to that picture, where customers say they want to run more of their IT capabilities on our platform, and I think we will get there. I have also had it the other way around, where customers say "you run your network functions on our IT platform", and actually we have some customers doing that: we are running managed network services on their IT platforms. We have done that; the other way around we have not done yet. It is early days, but I can definitely see it evolving to that point.

Hi, I'm Cathy Cacciatore from the OpenStack Foundation, and my question links back to the first challenge, the PCI device tagging: how are you getting these requirements into the Nova, Neutron, Keystone and other projects?
Let me turn the question around: how would you recommend we get those requirements in?

We had an extensive conversation about that on Monday, and we can talk about it offline. But I also think joining OPNFV, and joining with AT&T and China Mobile and the other telecoms operators to bring all the requirements in together with one voice, is probably a good idea; they are working closely with OPNFV to facilitate the processes to make that happen.

I wondered if the vendors were engaging with the projects directly.

I think there are multiple options we have for getting these types of requirements in. There are people working, as you mentioned, in the OPNFV community, and there is a lot of great work bringing that into the Product Working Group we have just established, where there is going to be a very specific NFV track, or telco track, where we can start looking at these things. And there is work going directly into the various projects, as you mentioned; there are actually two blueprints targeting one of those challenges. You will find with a lot of the different challenges that there are people going in directly who are also very active in the OPNFV community, so it makes sense to go and make it happen that way. I think looking at these different use cases and helping to prioritise what we can do in each phase is an important step; it is one we have worked on in the past, but I think it needs more attention, so that we can prioritise the input into these development cycles and are not hitting a big backlog at the end.

Okay, we will talk after. Thank you.

You didn't mention MANO much in the distributed architecture you presented to us. Will you also think about distributing the MANO functions?

That is a good question. Probably, I think, yes. In this case, a couple of the things we talked about, scaling and others, we specifically did not extend to MANO. Right now, the way we are looking at some of those MANO functions, which ETSI has not clearly articulated in relation to distributed NFV, is that we are bundling them into this as-yet-unnamed thing called a CPE manager, and that would perhaps be regionalised. But we would like to move the MANO functions as much as possible into the cloud and manage these devices from a remote location.

Okay, thank you. Okay, I think we have to move on now, otherwise we will be running into next week. Thank you.