Well, good afternoon, everybody. My name is Gary. For those of you who weren't at any of our earlier sessions, welcome, and thank you for coming to Cisco's sponsored room for the first of our two afternoon sessions. I've got a great panel lineup for you today. I'm going to do just some very quick first-name introductions; I'll let the panelists each introduce themselves in a little more detail before they dive in. Also, just a reminder: the little card that we handed out as you came in, please fill that out. We will be doing a drawing at the end of today's session for an Apple Watch. Cisco people, be careful about filling out your cards. And with that, some very quick across-the-stage introductions: our moderator today, Duane; panelists Lew, Uri, Darrell and JL. And with that, gentlemen, I turn it over to you. Thank you. Thank you, Gary. Good afternoon, and welcome to a panel discussion on real-world solutions for network function virtualization, or NFV. I'm Duane DeCapite, Director of Product Management for OpenStack in Cisco's Cloud and Virtualization Group, and we're excited to be hosting this panel today. We know that NFV is top of mind with virtually all service providers. A recent survey by Infonetics indicates that 93% of all service providers have either started to deploy NFV or want to deploy it in the next 12 months. So we know that NFV is on everybody's mind, and this panel today is everyone's panel. We're going to ask the tough questions, starting with: what are the barriers to adoption of NFV? And then we're going to open it up for Q&A at the end. So, on behalf of Cisco and our good friends at Intel and Red Hat, we're excited to be hosting this panel today. Cisco is dedicated to OpenStack. We've been involved in the OpenStack Foundation since its inception. Our Cloud CTO and Vice President Lew Tucker is also Vice Chair of the OpenStack Foundation.
We're a top-five member company in terms of number of memberships, and we've essentially been involved since the beginning of the OpenStack Foundation. So, who here has been to more than one OpenStack Summit prior to today? All right, what about more than three? What about more than five? Awesome. This is my sixth. You don't count, Ken. So this is my sixth. Lew and team have basically been there since the beginning, and we've been contributing code across all the major services: Nova for compute; Horizon, the dashboard; and Curvature, the drag-and-drop GUI for Liberty, that's from Cisco. We've done a lot of investment in Neutron, and now we're expanding beyond Neutron, working with a lot of components based on containers: Kolla, OpenStack services in containers, as well as Magnum, networking containers and applications on containers. We've also made a large investment in engineering for Cisco OpenStack solutions and OpenStack plugins for Cisco products like UCS and Nexus, which is nice because UCS and Nexus are data center staples. They're part of the FlexPod architecture, the Vblock, and now we're bringing them into OpenStack environments. We've also done a lot of innovation around things that make OpenStack better with open source projects: analytics and visualization, or AVOS; a Ceph early-warning system, to detect problems before a Ceph pool fails; smart scheduling, a better Nova. We also do a lot of work with validation, like CVDs, Cisco Validated Designs. We take Cisco, Red Hat and Intel products, we put them in a lab, we scale it up, we validate it, and we stand behind the solution. We also do a lot of work with customers, helping them with best practices so they can be successful with OpenStack. We work with several large customers, including Comcast, WebEx and Photobucket, and some newer customers with our Red Hat and Intel partnerships: the Eli and Edythe Broad Institute of MIT and Harvard, the Broad Institute.
This is genomics, gene sequencing, literally changing people's lives with OpenStack, Cisco, Red Hat and Intel; and also FICO, one of the customers presenting today, also a good Cisco, Red Hat and Intel OpenStack customer. We also have a rich ecosystem around NFVI featuring Red Hat and Intel. We've done a lot of investment in other open source communities like OpenDaylight and OPNFV. We've also contributed more than 40 applications to the open source community. We've done a lot of work with validation. We recently had a report from the European Advanced Networking Test Center, or EANTC. They put out phase one of their report earlier this year, talking about how Cisco VPP, vector packet processing, along with Intel DPDK can get 10 gigabits per second with a single core. Phase two was released just prior to the OpenStack Summit, and it talks a lot about Cisco's NFVI solution. So NFVI is kind of the best of both worlds: you have the flexibility of network function virtualization, yet it's embedded within the infrastructure for a turnkey solution. We have Cisco UCS for compute and storage, as well as Red Hat as a software overlay. Also Cisco Nexus and a wide variety of networking controllers: APIC; OSC, the Open SDN Controller, based on OpenDaylight; and VTS, Virtual Topology System, for a nice VXLAN overlay. A better OpenStack, which is detailed in the EANTC report: easy to get started, easy to scale, high availability, very secure, all with a single pane of glass for management and integrated infrastructure monitoring. We have a great go-to-market between Red Hat, Intel and Cisco, all certified, a very high-performing solution. The entire solution is validated, all with a single support point of contact within Cisco. So without further ado, let's get into it. To my left, we have Lew Tucker, our vice president and CTO. Massive amount of experience: Thinking Machines (remember that scene from Jurassic Park, right?),
Sun, a little thing called Java, on the core team for that, Sun Cloud. Also, to my far left, we have Uri Elzur from Intel, chief technologist in the SDN division, on the OpenDaylight board as well as the technical steering committee, and also co-editor of the IETF network service header, or NSH. You'll also be at the IETF in Yokohama next week as well. To my right, Darrell Jordan-Smith from Red Hat, or DJS, as you're known within Red Hat, vice president of service provider sales. And on my far right, JL Valente, extensive experience up and down the valley, including CEO and venture capitalist experience. So let's get into it. We'll ask the first tough question. We'll send this over to you, if you don't mind, because you're on my far right. Okay. So, is NFV really ready for prime time? What are some of the biggest challenges facing adoption and rollout of NFV and SDN in the industry today? Well, it's a good question. But just for those in the room, DJS is the short form of my name, or, as people like to say, double-barrel. So just add that. But no, in terms of: is NFV real? Is it being deployed? Is it ready for prime time? I think we're beginning to see a number of use cases out there that actually demonstrate that NFV on OpenStack is ready for prime time. It's the reason why we partnered extensively with Cisco: to provide the VNFs and the stability around the platform that is necessary to make it a viable solution, with all the sales and support services from an operational perspective that allow you to deploy it within the network. Some of the key things we're seeing in terms of challenges that we're working on collectively upstream are around security. I think that's a big topic that we're going to continue to develop. Service chaining, I think, is going to be another huge topic that we bring together over the next 12 to 18 months.
And you were asking a little bit about the availability of VNFs and applications that sit in the cloud, in terms of stateless and stateful applications. At Red Hat, we call them Mode 1 and Mode 2 applications. Mode 1 is really where you just virtualize an application that sits on a virtual machine. Mode 2 is where you have a stateless application that is able to replicate itself, or self-heal, and migrate across the cloud in a dynamic manner. So I think those are the interesting things from our perspective in terms of where we see things going and the acceleration of the marketplace. And coming back to your comment earlier about telcos: everyone in the room would probably understand that Red Hat Enterprise Linux is pretty well deployed in every telco in the world. I mean, they've migrated a lot from Unix to Linux. And out of all of those customers that we have today, there isn't one that I can tell you isn't looking at NFV. So I completely corroborate what you're saying there. Excellent. Anyone else care to comment? Great. Is this one? Okay, great. So, I think we are making very good progress. I guess this panel is a good opportunity to talk about multiple things that are happening in OpenStack, in standards, in other open source projects as well, and specifically also work that is being sponsored and driven by the three companies involved. Now, on one hand, we are actually seeing not only trials but also some deployment, which is good. But probably more interesting for you is if I describe some of the things that we as a community, and specifically at this event as OpenStack, still need to focus on and work on to make better. So, one statement. There was talk about stage one and stage two applications: there is a difference between what people expect from my cloud management system when they think about cloud applications and when they think about telco applications.
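The Mode 2 behavior described here, an application whose controller replicates and heals its own instances, can be sketched as a simple reconciliation loop. This is a minimal illustration only; the function and field names are invented and do not correspond to any OpenStack or Red Hat API.

```python
# Hypothetical sketch of the "Mode 2" idea: a controller that compares
# the desired replica count with the instances it can still see, and
# replaces any that have failed. Names are illustrative, not a real API.
import itertools

_ids = itertools.count(1)

def reconcile(desired_replicas, live_instances):
    """Return the actions a Mode 2 controller would take this cycle."""
    actions = []
    healthy = [i for i in live_instances if i["healthy"]]
    # Remove failed instances first.
    for inst in live_instances:
        if not inst["healthy"]:
            actions.append(("delete", inst["id"]))
    # Spawn replacements up to the desired count.
    missing = desired_replicas - len(healthy)
    for _ in range(max(0, missing)):
        actions.append(("spawn", f"vnf-{next(_ids)}"))
    # Scale down any surplus healthy instances.
    for inst in healthy[desired_replicas:]:
        actions.append(("delete", inst["id"]))
    return actions

# One cycle: three replicas desired, one instance has failed.
state = [{"id": "vnf-a", "healthy": True},
         {"id": "vnf-b", "healthy": False},
         {"id": "vnf-c", "healthy": True}]
print(reconcile(3, state))
```

Each pass compares desired state with observed state and emits corrective actions, which is the same pattern that Kubernetes-style controllers and Heat autoscaling follow at much larger scale.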
And I don't want to take too much time right now; I'll just mention a few, and I guess there will be an opportunity with the other questions to get into more details. But as an example, people would like the stack to be carrier grade. What that means is different for different people, but it clearly means that it's better than what we have today. They would like to have more network awareness. They are more sensitive to the right placement, in order to achieve the right efficiency, in order to get higher infrastructure utilization, to lower the cost, to make the whole thing more feasible. They would like to see more network awareness in the placement. We have made good progress in describing what a modern server architecture is to the orchestrator. We have work to do on the networking side. Just to mention a few, and I'll mention more as we go. In the last couple of years, we have seen a big shift take place in OpenStack. OpenStack was originally modeled essentially after an Amazon-style in-house cloud, which is very much a compute- and storage-centric view of a cloud platform designed for web-based applications. Then, almost two years ago, the entire telco industry woke up, and they realized they are spending too much money on fixed machines out there: special-purpose hardware with very long cycles that has been hardened, been made telco-ready, carrier grade, but just isn't keeping up with the changes in the industry. And so they are moving to software. They discovered they have data centers. They looked at what platform to move to. They chose OpenStack. From the Foundation's point of view, we have had a working group on telco. We have had another working group on enterprise, and they are coming together, because they both are trying to make what is now essentially a carrier-grade platform that you can run these applications on. But I think the VNFs themselves are lagging behind.
The virtualization of the network services, that's where all the work is going to be. We can harden the platform. We can make OpenStack a great platform for this. We are finding these services now exist above OpenStack, because they have to be orchestrated from there, and then they require interfaces below OpenStack into the hardware. It is like a sandwich we are creating, in which OpenStack is managing virtualized resources, but you need to have that communication between the top and the bottom, and that, I think, is the challenge we are all facing. Everybody is certainly very bullish. This is happening. No one is stopping this train. So now I think the race is on to really do it in a way that we can deliver the kind of services with the reliability, scalability, everything we would expect from web apps. Yeah, and just to maybe close on this topic and move on to the next question you may have: yeah, it's real. The question was about NFV and how much progress has been made, and we discussed it yesterday with Red Hat as well. Sometimes people say, well, I can't disclose names of customers and what's happening. But overall, there are a number of organizations, from the different vendors, that have already been out there and disclosed what they're doing and what they've done, not just in trials but also in deployments. On our side, from the Cisco standpoint: Deutsche Telekom, cloud VPN in five or six countries, that's public. It's out there. Here in this country, SoftBank actually published, a week or two ago at the Layer123 event in Düsseldorf, what they're pushing. In fact, it's interesting, because many of those customers are also evolving toward hybrid physical-plus-virtual NFV.
So they are doing physical and virtual, not just virtual: a mix of a number of capabilities with different vendors, whether it's A10 or Cisco or Fortinet, for example, and actually being able to migrate some of their assets over time to a virtual environment for enterprise services. Very unique. And on top of that, it's not just OpenStack, and we'll discuss it: on the way to OpenStack, many of them come from VMware. You've got to be able to support that. So there are actually two modes to get to OpenStack, where they all want to be, but that's not where much of their infrastructure, environments and even operational knowledge are. So all of those elements participate here. There are many other customers; some of them have obviously been disclosed, like Telecom Italia for us as well. So you see this ramp-up, not just in trials but going into production by the beginning of the year, or in production already at this stage. So just one short comment to close on this, maybe going back to Lew's comments. The layer above, for those of you who follow the ETSI work, is the MANO layer, and that's a very big discussion right now in the industry. And actually, we have an industry talk later this afternoon to start the community conversation here about what it is that we need to do in OpenStack in order to allow this other layer to be placed neatly on top of OpenStack. And I fully agree with the sandwich model, too. Excellent point. Thank you. So, speaking of SDN World Congress, and Darrell, I believe you were in the house for that one in Düsseldorf. Yes. So there was an interesting report from British Telecom that came out, which fundamentally said that they're considering dropping OpenStack for virtual enterprise services in favor of proprietary technology unless OpenStack addressed six fundamental issues. And they were pretty basic. I mean, security, upgradability, manageability.
What are the panel's thoughts on that report? That one was essentially directed at me originally. I mean, I know the individuals at British Telecom who made the report very well. I think the press sensationalized a little bit what they were really saying. I think it would be fair to say that what they were really saying is: help us, as the community, make OpenStack carrier grade so we can deploy it in our business. And they mapped out six areas, and we're working on them with Intel as well, and I'm sure that Cisco has areas where they're developing solutions too. Out of those six areas, five are pretty well defined in terms of blueprints and other things that we're developing upstream, in conjunction with Cisco and Intel. And there's one that we need to work on and listen to and try to address very specifically, and that was around security and some of the things that they're striving for there. So from Red Hat's perspective, we saw it more as a call to action from a very, very important operator, with very talented people who look at this every single day and work on it very vigorously. And it's an area that we're very keen to improve. And I think a lot of the efforts you're going to see, as Lew and Uri were intimating earlier, are a lot more focus on how we make OpenStack carrier grade, or build those extra features upstream for everyone to benefit from, in terms of that sandwich we were also talking about. When I talk to customers, they're always saying that they're afraid to go to the latest version of OpenStack because it's less stable and everything else. In a community development process, that's not true. The latest version is actually where all the fixes are made, where all of the security patches are made, and everything else. Backporting those to the earlier releases is going to take a long time.
So I think we're going to start to see an inversion of the usual paradigm, where a lot of companies now move as aggressively as they can to get to the latest version, because that's where all of the changes we're talking about are being made, right up at the head of trunk. So that's going to be a change, I think, in terms of traditional software deployment. Yeah, since I was also in the room, I fully agree about the over-sensationalization that happened with that event. And to talk about some of the other challenges that Peter mentioned in his talk there: one of the challenges he was talking about was the idea that when you connect or disconnect, when you have a NIC failure and you have multiple NICs on that machine, you don't know which one is actually the one you are using. Again, all of this is community work, so I'm not going to repeat that every time, but there is work going on, with all of you and all of us here, toward more awareness of the data plane, in terms of its capabilities, in terms of what's being used, and in terms of what is left. So, as Darrell pointed out, these points are already being addressed, and as Lew pointed out, if you go with the latest, you get some of it. There are areas that are somewhat challenging for NFV as we take OpenStack to new areas, and one of them is, for instance, in the context of CPE, customer premises equipment, where what you have is the brain, some controller sitting someplace, and it has to have many tentacles out, trying to reach all of those tiny deployments in multiple places. This kind of model, versus the model where everything happens in the data center, is a little bit new. And the point cited by British Telecom in that case was, for instance: oh, really, each of them could potentially pick a different IP address, and as they do, I may have to punch individual holes in my firewall to enable all of those.
This is no different from the work we have cut out for ourselves in this community every day. You have new use cases, which is good news, so we need to add capabilities, and we are all hard at work on it. Yeah, in fact, we've got plenty of examples of those. So maybe I'm the contrarian here. I think there is a lot of work in OpenStack that needs to be done from that standpoint. Virtual CPE, absolutely. In fact, today, for most virtual CPE environments, when you have 100,000 or a million endpoints, you're not going to go OpenStack, because of the footprint, the complexity and the cost; it doesn't make any sense, the overhead is too high. But also even CMTS, virtual CMTS: there are a number of cases today where we still have a lot of work to do in terms of the distribution of the control versus the compute areas and so on. Because it would be great if we could expand and have coverage from the branch, or from the CPE, all the way to the back-end data center. And there is work to be done, absolutely, technically and also economically, so that service providers can take it into next-generation POPs. In Europe there's one customer with 160 POPs. You're not going to get 160 OpenStack masters driving that; it doesn't make any sense. So there is work that needs to be done. And it'll be interesting; maybe there'll be an update at Mobile World Congress later this year. Yeah, I'm hoping that we'll somehow get together to formally answer Peter and publish a bit of a paper on that and try to move things forward. That's our plan, anyway. Good, awesome. So, we mentioned VNFs, virtual network functions. What VNFs are service providers looking to deploy first? We talked about virtual CPE, and we were talking about some others yesterday as well. But what are the VNFs, and are these just virtual instances of the existing appliances they're deploying today? So again, I'll start.
So from our perspective, a lot of the use cases we're seeing at the moment are around firewalling in particular, and load balancing. I think those are two particularly interesting VNF areas. But more from the perspective of: how do you auto-provision that across a very large-scale environment? How do you take the complexity out of provisioning it? Taking something that would take a typical operator 18 months and bringing it down to hours, eight hours or whatever the number is. So we're seeing some interesting applications around virtual mobile VPN services with some of the larger operators, and we're beginning to see some of those actually go live in trials next year. So, speaking of virtual network functions: where are service providers looking to deploy them in the network first? Is it in the traditional POPs or distributed POPs? Or are VNFs changing the paradigm of where the functions sit in the network? Go ahead. I feel as if I'm doing all the talking here. I'm very happy with the model. You go first. That's our pattern. So from a VNF perspective, what we're seeing is that a lot of those are, as I said earlier in the conversation, standard appliance-based applications and services that happen to have been virtualized. So typically those are deployed in a data-center-based environment versus on the edges. JL was explaining earlier the complexities associated with that. However, there are a number of operators that are looking at CPE models, trying to figure out how to address that and running into some of those challenges we're trying to solve. So from our perspective, they're typically Mode 1, stateful applications that sit in the network.
Typically they are, you know, gateways, IMS-based applications, load-balancing and firewall-based applications and services, with some very interesting technologies around provisioning and management and orchestration. Again, I think you were talking about the complexity and the need for that technology, and certainly some of these standards bodies are beginning now to look at that very seriously. So I've got a question for the people who are working here. In my view, we can't just take a VNF as it existed on a router or a switch, you know, a blade running in one of these, plop it down on a VM in a cloud and expect to get decent performance. This is an opportunity for us to rethink those things. If you look at large-scale distributed web applications, they're not designed at all the way you would have designed an enterprise application, scaling vertically. They're designed to scale out right from the start. We're seeing this inside of Neutron, for example. Neutron itself started with a network service node, right? One node, and all the traffic for all the tenant networks was going to go through it. And we know you can't scale that. So we're looking in the Neutron community itself at how to make that truly a distributed function across all of the nodes. That seems to be the direction the cloud is going. Can we expect to see the same thing with the VNFs themselves? Is that your Mode 2? Yes, my Mode 2. I think that's where we're going. I think people are just rushing to get their environments virtualized first. I certainly think that these applications, as they become Mode 2, become more knowledgeable about what's happening around the application, and certainly into the networking layer. I think that's where we need to get to. There's a lot of work to drive that. And then we get into, I think Uri was talking about it: what is carrier grade?
Is it high availability of the application and services? Or is it hardware implementations that are highly available and completely robust? And I would like to think that we can get to the stage where Mode 2 based applications, living in the cloud in the true sense, enable us to scale and drive that. And then, coming back to your point earlier about people wanting to get to the latest version of OpenStack: the latest features are in OpenStack in the next release, certainly around real-time KVM and real-time Linux, all to facilitate cloud-based radio access networks. So those are the more interesting cloud-based applications. There is, just to that point, and it goes back to the MANO piece, the VNF manager. If we look at the piece from a MANO standpoint, the VNF manager interacts directly with the VM, with OpenStack. And so the knowledge you have to have of the VNF, even the descriptors, the payload, the characteristics of the VNF, whether or not the VNF is going to use some specific advanced capabilities, you know, CPU pinning or whatever capabilities it needs, is going to have to be rendered at that level. And there is also, for example on the Cisco side, the compute standpoint: we know that compute has to evolve; we can't take generic compute and expect to run high performance. And actually we've demonstrated that, so that you can take, you know, thousands of enterprise customers, or 100,000 residential subscribers, because it's also going into the residential piece. And that's where, if you really want to run it, as we said earlier, around 2016 or 2017, if you really want to scale up those environments, you're going to need to have those environments designed from the orchestration down to the compute. It's a game changer. So, a few comments on these two, especially as compute evolution came up.
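To make the descriptor point concrete: the idea is that a VNF descriptor carries platform requirements (CPU pinning, huge pages, DPDK-capable NICs) that the VNF manager or scheduler must match against what each host advertises. The sketch below is hypothetical; the field names are invented, and real ETSI MANO descriptors and the Nova scheduler are far richer.

```python
# Illustrative only: a toy check of whether a host's advertised
# capabilities satisfy a VNF descriptor's platform requirements.
# The field names are made up for the sketch.
def host_satisfies(vnf_requirements, host_capabilities):
    """True if every required feature or amount is available on the host."""
    for feature, needed in vnf_requirements.items():
        have = host_capabilities.get(feature)
        if isinstance(needed, bool):
            # Boolean features: required flags must be present and true.
            if needed and not have:
                return False
        elif have is None or have < needed:
            # Quantities: the host must offer at least what is required.
            return False
    return True

vnf = {"cpu_pinning": True, "hugepages_2m": 1024, "dpdk_nics": 2}
host_a = {"cpu_pinning": True, "hugepages_2m": 4096, "dpdk_nics": 2}
host_b = {"cpu_pinning": False, "hugepages_2m": 8192, "dpdk_nics": 4}

print([host_satisfies(vnf, h) for h in (host_a, host_b)])  # [True, False]
```

In OpenStack itself, requirements like these are typically expressed as flavor extra specs (for example, a dedicated CPU policy or large page sizes) that the scheduler matches against host capabilities.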
But going back to the two modes, or two stages: we could look at that from at least two different aspects. One of them is simply, you could even call it social, the way new technologies progress in general. A very good analogy for that would be SDN. Somebody suggested, oh, let's do OpenFlow-based switches, and everybody went: do what? I'm already making lots of money with what I have right now; too many changes. And so Mode 1, or stage one, really is where vendors start to say, yes, we actually believe; let us see what happens out there. Let us do something really quick, more or less the applications as they exist on a blade, and start moving to more of a software model. Oh, that actually doesn't work quite right, but I already have some customer interest; now I'm moving to Mode 2. And now I'm really starting to make it a little bit more stateless. I'm still not done. I'm not done with scale. I'm not done with geography. Many times my application is really restricted to, let's call it one administrative domain, be it a data center or some other place. And I don't actually have the ability to do one of the things that MANO, as an example, would like you to do: oh, we have an important sporting event, I'd really like to scale up that capability and scale down another one. Which in techno-speak would mean: let me scale up this VNF. How do you scale it up? What do you want to do? Do you want to add more compute on the resources I have right now? Or do you want to go and fire up new VM instances? Do you have the right descriptors that come from the top, saying what the KPIs are, what the requirements are under which I'm going to fire up new VMs? Oh, the event is over; how do I scale back down? You look at OpenStack today, at many clouds, at the Amazon model as it started: everything was way more static. So we need to do a few things.
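The scale-up and scale-down question raised here can be sketched as a single decision function driven by a KPI and the thresholds a descriptor might carry. This is a toy illustration: the threshold names are invented, and a real MANO stack would combine many KPIs, cooldown timers and placement constraints.

```python
# A toy sketch of KPI-driven elastic scaling for a VNF: given the
# current instance count and a load KPI in [0, 1], decide the target
# count for this cycle. Thresholds are invented for illustration.
def desired_instances(current, kpi_load, scale_out_at=0.8, scale_in_at=0.3,
                      min_instances=1, max_instances=10):
    """Return the target instance count for one scaling decision."""
    if kpi_load > scale_out_at and current < max_instances:
        return current + 1
    if kpi_load < scale_in_at and current > min_instances:
        return current - 1
    return current

# The big sporting event begins, load climbs, and instances follow;
# then the event ends and the VNF scales back down.
n = 2
for load in (0.85, 0.9, 0.6, 0.2, 0.1):
    n = desired_instances(n, load)
print(n)  # back to 2 after the spike
```

The thresholds here stand in for the KPIs and requirements that, as the panel notes, have to come from the descriptors at the top so the orchestrator knows when to fire up or retire VMs.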
We need to, A, have a way to standardize a little bit better on information models and descriptors: how we describe the service required, how we describe the infrastructure available, and how we make the best match. That is the conversation later this afternoon. We need to do other things too. There was the notion of compute evolution. That's where the data plane capabilities and performance become really instrumental in your ability to get the maximum out of the infrastructure you deploy. Not only that, you also want it to be flexible. And unlike the model we have today, where you have VMs that are related to different applications, in the NFV model, with the VNF as one of the building blocks, you now have a few VMs that are actually related to each other. They are related to each other not only in terms of compute requirements, but absolutely on the network side as well. You need to take that into account. But on any individual piece of equipment out there, you may find some pretty I/O- and network-hungry instances of something, potentially competing for resources. So we put that knowledge and those capabilities into our platforms, as well as educating the orchestrator: hey, this platform has that, that platform has something different, and here's how you take advantage of it. Good point. Quick question for Red Hat. So the user... no, that's awesome. Walter, since we're going to be cutting this off in about five minutes, if you've got questions, please come up to the mic. So, the user survey for this summit just came out, and NFV is the second biggest area of interest, next only to containers, which everybody's talking about. So Red Hat has done a lot of innovation with Project Atomic for containers. What's the conversation with service providers about containers? Well, a lot of service providers are looking at container technology for density, predominantly.
And they're looking at what applications they might move to containers and operate as a microservice, in a container within a platform as a service, as an environment that sits on top of, or in conjunction with, OpenStack. We don't see customers looking for one thing or the other; we'll probably see a hybrid. We see a lot of interest in technologies around Ceph: putting Ceph into containers and managing significant amounts of storage, because the latest SSDs and other storage technologies that are coming have microprocessing actually built into the drive itself. So they look almost like compute nodes that sit in the network, within a container. So some interesting use cases around that. I think from a container perspective, and this is my view, other people in Red Hat will have a different view: some think it's already ready to go. From my perspective, there's certainly a lot of work to be done around security with containers, and a lot of work on networking around containers. I think it's an interesting topic that we want to invest a lot of time and money in, and that's why we've launched Atomic, which is our RHEL-based container technology from Red Hat. And we're beginning to have some very interesting conversations with a lot of the operators. So we can talk to them about the world of virtual machines, we can talk to them about the world of cloud, we can talk to them about the world of containers, and how those things might converge. And the framework is a platform as a service. Great. So we have a few minutes left. We can take a few questions from the audience. If not, I've got a question. OK. Not for Red Hat, I hope. No, not for Red Hat. In general, when we're looking at NFV applications, I'm wondering: cloud computing has generally been there to be a platform serving multiple tenants. Multitenancy; we have a lot of different things going on.
Networking is generally part of the system infrastructure; it's not considered an application. So are we applying the wrong model there, thinking of it as just a tenant application? Ideally, I would like to think we could pull that off, but I question it because of the need for placement that you mentioned. We've done a lot on placement. Well, that's a very difficult problem. You have to know what else is going on on a node, and generally, in cloud computing, a tenant does not know what's happening on a physical host. We talk about network visibility; you want to be able to have that. So either we can abstract all of that and keep it secure, so that multiple tenants can't game each other, or we have to say: no, these are system administration apps, and these apps are essentially owned by the service provider, and they have special privileges. Even in OpenStack, we have to start looking at how to build that model in. So I'm just curious: how far do you think we can take this multi-tenant view, or should we think of these things as a different class of application? You must have an opinion.

I was hoping you wouldn't ask. I mean, OK, CPU pinning, a lot of these things. So I think we have multiple things happening with the question of the role the network should play. In OpenStack, people used to ask: are we turning the network into a first-class citizen yet or not? It appears to me that the real conversation starts with the way we model everything, the way we understand what capabilities we have in whatever infrastructure you deploy. If we simply stick to the NFV example, I believe there is a need for more network awareness than what we have today. Whatever it amounts to, it is also community-driven.
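The CPU pinning mentioned here can be illustrated with a toy bookkeeping model. This is a deliberately simplified sketch (real schedulers such as OpenStack Nova track dedicated cores per NUMA node and per host aggregate): the idea is that a pinned instance gets exclusive host cores, so a network-hungry VNF is not competing with co-resident tenants for the same CPUs.

```python
# Illustrative sketch of CPU-pinning bookkeeping: each pinned instance is
# handed dedicated host cores, removing them from the shared pool.
# Simplified model; not how any real scheduler is implemented.

class Host:
    def __init__(self, cores):
        self.free_cores = set(range(cores))
        self.pinned = {}  # instance name -> set of dedicated core ids

    def pin(self, name, ncores):
        """Reserve ncores dedicated cores for an instance, or fail loudly."""
        if len(self.free_cores) < ncores:
            raise RuntimeError(f"not enough free cores for {name}")
        cores = {self.free_cores.pop() for _ in range(ncores)}
        self.pinned[name] = cores
        return cores

host = Host(cores=8)
host.pin("vRouter", 4)    # data-plane VNF gets 4 dedicated cores
host.pin("vFirewall", 2)  # second VNF can never touch the vRouter's cores
print(len(host.free_cores))  # 2 cores left in the shared pool
```

This also makes concrete why such instances feel like "system" rather than "tenant" workloads: someone has to hold the global view of which cores are spoken for.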
So it's not that simple. Many times, what happens with community projects is that in hindsight you see: oh, actually, we did make the jump you were alluding to. But if you put it that way in front of the community on day one, it doesn't necessarily happen that way. I think what we'll see is features, and we are driving some of them, like placement as an example, where awareness also closes the loop back with Ceilometer. There is a need for analytics to actually understand what your infrastructure is doing, so you can get the best out of it. When we look back in a few years, I think we'll be able to answer your question much more clearly.

JL, I'm wondering, because you're dealing with a lot of our customers: what is their view? Are they viewing this as part of their system infrastructure, or as an application they're running on their cloud?

It depends who you talk to in those organizations. To your point, placement even starts higher up. You have to decide, for example, back to the central office case: say a new customer comes in with different sites. Where do you actually put the workloads dedicated to them, as you would with mobile, for example mobile enterprise, taking into consideration the network aspect plus, obviously, the availability and the affinity or anti-affinity rules? There are a number of things: the data, what data needs to reside where if you span countries, as in Europe, and so on. So, back to your point, modeling bottom-up to expose the infrastructure, and top-down from the services, have to be reconciled. And today there's no good way. You've got multiple ways, different ways people are trying. I was going to say TOSCA, but yeah, there are multiple ways: you're between YANG, TOSCA, and other ways, even within OpenStack, of rendering things. I think we still have a long way to go to make those systems actually cooperate.
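The affinity and anti-affinity rules mentioned above can be sketched as a simple placement check. This is a hypothetical simplification (OpenStack expresses similar policies through Nova server groups, and real orchestrators also weigh network distance and data residency): two instances in the same anti-affinity group must never land on the same host, typically for high availability.

```python
# Hypothetical sketch of an anti-affinity placement rule: instances sharing
# an anti-affinity group must not be placed on the same host.

def allowed(host, instance, group_of, placements):
    """True if placing `instance` on `host` violates no anti-affinity rule."""
    group = group_of.get(instance)
    if group is None:
        return True  # instance belongs to no group, place anywhere
    return all(
        group_of.get(other) != group
        for other in placements.get(host, [])
    )

group_of = {"fw-a": "fw-ha", "fw-b": "fw-ha"}    # both firewalls in one HA group
placements = {"node1": ["fw-a"], "node2": []}    # fw-a already on node1

print(allowed("node1", "fw-b", group_of, placements))  # False: fw-a is there
print(allowed("node2", "fw-b", group_of, placements))  # True
```

Affinity is the mirror image (require a shared host or site), and reconciling many such rules with the network-aware constraints discussed earlier is exactly the bottom-up/top-down modeling problem the panel points at.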
And that is actually because we are changing the model. As we move to a more converged infrastructure, multiple industries are looking at the same infrastructure, and whether we like it or not, they come with different legacies, so they have different information models and different application models in mind as well. And we are out of time to talk about containers.

We are unfortunately out of time. Just real quickly: a lot of the things we talked about today are in a demo at the Cisco booth downstairs, so please go to the Marketplace and check it out. Also, please contact your Cisco sales rep, I saw Brian walking around earlier, and ask how they can help you be successful with NFV. You also have an invitation to contribute: a lot of these components, like Cloud Pulse and AVOS, are on GitHub and Cisco DevNet with APIs, so please contribute. And on behalf of Cisco and our good friends at Red Hat and Intel, we thank you so much for your interest in and support for NFV, and we hope you enjoy the rest of the summit here in Tokyo. Thank you. Thank you.