All right, well it looks like we're pretty much at capacity now, though there may still be a few seats in the back for those of you who are coming in. Good afternoon and welcome to a panel discussion on what's next in network virtualization. We've got a great panel this afternoon. I'm Eric Hanselman, 451 Research. We've got, from this side (of course, I've got it right up here), Balaji Sivasubramanian from Cisco; Dan Mihai Dumitriu from Midokura; Mike Cohen from Big Switch; then Nachi Ueno from NTT; and Akihiro Motoki from NEC Central Labs. So we're going to kick it off with a set of questions, and we'll have a little time for audience questions at the end as well, so keep your thinking caps on and see where we go from here. I wanted to start off with a little bit of background on SDN and network virtualization broadly. We've come through a time period where we've gotten used to what SDN is all about. We're sort of in the midst of a pretty hefty hype cycle right now in terms of a lot of the messaging that's going on, and there's a lot of confusion. But what I want to talk about here today, and get the opinions of our panelists on, is really what's coming next. What are the realities, and where does this fit within OpenStack today? As we stand on the cusp of Grizzly, with Folsom we finally got a fully fleshed-out network abstraction with the Quantum project. But there are a lot of places we can go with this. So we're talking about where that fits and where we're headed. To start off with, we seem to have a switch, excuse me, a split in the approaches to SDN. We started out with a lot of enthusiasm for OpenFlow as really one of those initial manifestations of what we could do with low-level network control, and we've moved on to an appreciation of, and a greater interest in, overlays. There had been a lot of resistance initially to overlay technologies like VXLAN and NVGRE.
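[Editor's aside: for readers new to these encapsulations, VXLAN wraps a tenant's Ethernet frame in UDP, prefixed by a small header that carries a 24-bit virtual network identifier (VNI). A minimal sketch of that header layout per RFC 7348; this is an illustration, not production code:]

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header from RFC 7348: a flags byte with
    the I bit (0x08) set to mark a valid VNI, 24 reserved bits, the
    24-bit VNI, then 8 more reserved bits."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # First 32-bit word: flags byte 0x08, rest reserved (zero).
    # Second 32-bit word: VNI in the top 24 bits, low byte reserved.
    return struct.pack("!II", 0x08 << 24, vni << 8)

hdr = vxlan_header(5001)
assert len(hdr) == 8
```

The 24-bit VNI is the point of the exercise: it allows around 16 million virtual networks, versus the 4094 available with VLAN tags.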
But we've gotten to a point at which we're starting to see that those may be useful abstractions and useful ways to connect. Where do you see each of these technologies fitting? Are there use cases that are appropriate to individual approaches? Or is this something where there really needs to be some mixing and matching as we move forward? So we'll start in a nice linear order. Balaji, why don't you lead off? Yeah, sure. I think if you look at it, what does a customer really want to achieve? That's what drives what we do. I think there's definitely a trend towards automating the network along with the rest of the stack the data center is running, storage and compute. OpenFlow is one mechanism you have for trying to do data plane management. And as you start building the real solutions that you want to build, you need various approaches. Overlay is one aspect of building solutions, where you abstract away the physical network and build it all on top of that. The advantage you get, I guess, is that in some ways you get it fast and quick, at software speeds. But I think what we at Cisco believe is that eventually you need to have the physical network fully integrated into the overall orchestration. So if you look a year out or so, I think you'd see things more fully orchestrated through OpenStack; it's still the orchestration layer, the cloud orchestration layer, but you'd have the physical as well as the virtual network completely orchestrated into it. So you don't have to worry about, should I do an overlay or should I not do an overlay? It should be about, you know, does the network support your needs, which is building a self-service portal or fast cloud deployments. Yeah, well, I'll hand it over to you, someone who's a little more overlay-focused. Right. Yeah, we're entirely overlay-focused, so forgive me if my answer sounds biased, but...
So for building virtual networks for tenants, in a multi-tenant environment, I believe that overlay is definitely the right approach, because the state and the data only concern the edge. They don't really concern the core of the network. As far as OpenFlow itself goes, I think Balaji touched on the point a little bit: it's all about the benefits to the customers. So initially, maybe there was a lot of excitement about OpenFlow as a solution, but the problem itself might have been mischaracterized. The problem, I believe, is that customers want to automate network configuration and provisioning, even of the physical network. But automating the physical network does not automatically lead to the conclusion that we need to centralize the control plane functions. So if you can have the tried and true decentralized control plane based on things like OSPF, BGP, and so on, but have a good way to automatically configure these things, then I think you don't necessarily need the central control plane in OpenFlow. So obviously, our take on this is probably a little bit in between, in that overlay technologies have been excellent in the network virtualization industry as something easy to deploy, a great starting point for people to consume the technology, because it works on your existing hardware in a kind of non-invasive way. However, as Balaji said, it really doesn't solve the entire problem. It really leaves you with two separate management planes now. You actually have to manage your overlay networks, and you have a set of them allocated to different tenants, and still manage and maintain your physical network. So you've actually now got two separate problems where before you had one.
So by integrating these technologies, and actually having OpenStack orchestration and Quantum orchestration be able to reach down directly to the hardware, we see that as the direction this technology really needs to go in the long term, and as Big Switch that's the direction we're going with our technology as well. In terms of OpenFlow, which has been discussed so far, we're actually much more bullish on OpenFlow than folks have been previously, and the reason is we see OpenFlow as a way to offer a standard protocol. Actually having that standard becomes very important, because it offers a degree of interoperability that has not been present in networks before. And the idea of centralized management offers a lot of operational benefits that have also not been realized in networks previously. So obviously OpenFlow is not the end-all be-all for network virtualization; it will not be the only technology employed, but it can actually be a very, very useful tool in integrating physical and virtual domains in a network virtualization platform. Okay, so we are carriers, so we have very large VPN networks. Maybe a VPN is a kind of virtualized network, so we have a long history of managing virtualized networks. What's new with SDN is that Quantum is really important, because we can share the tenant-facing view of the client's use case in Quantum, and then we can realize it on the actual network. So we are investigating many technologies, including overlay technologies and OpenFlow, and now we are focusing on BGP-based SDN overlay; we actually have a lot of experience with BGP and MPLS and such. So we are investigating expanding that to inside the data centers, and then we can manage the whole network with unified operations. Actually, one of the drawbacks of overlay technology is the burden on operations.
So the operator should manage the two layers, but if we choose BGP for the overlay, the burden will actually be decreased for the operators. I think overlay technology and OpenFlow can coexist. An overlay network is a network model which defines how to transfer the packet while keeping the separation of multiple networks. On the other hand, OpenFlow is a way to provision network equipment. With OpenFlow you can define rules: packets from A should be forwarded onto virtual network X. In addition, OpenFlow brings an open, standard way to control network switches, so a lot of equipment and software that can talk OpenFlow has been released. With OpenFlow we can control and monitor our network as a whole, in a centralized way. So I believe both technologies can coexist going forward. We've heard some different perspectives on where this fits. I think one of the things I'm hearing across this is that management winds up being one of the concerns in terms of bridging both an overlay capability and underlying network control. If we have to get the overlay out to the edges, to be able to either tunnel between data centers or transit across the virtual edges we've created, does it make sense to just do overlay the entire way, and live with a lot of the difficulties or potential challenges and maybe some of the performance issues? Isn't that a better path? Well, we'll start with you first; we'll roll it back the other way. Both overlay networks and lower-level provisioning like OpenFlow are important, as I said before. Provisioning and managing the physical switches is still key to controlling network performance, like QoS or bandwidth. The network controller should play this role and hide the details of the physical switches. So overlay does not solve the whole problem. Well, actually, I'll kick that back over to you; bring a little more of a hardware perspective.
Is hiding the details of the underpinnings of the physical network a good thing or a bad thing? Do we really have to know about the guts of it? What does an OpenFlow developer have to know about how that fits together? I think, again, I'll come back to how the customer will eventually run and deploy this network. So, they have a physical network, they have these virtual networks. I mean, they have these virtual switches as well as physical switches, physical devices. You still need to maintain the whole thing. I think the approach that, you know, for Dan, being a software vendor, that's the control point that you have, and it's a perfectly acceptable business model to say that makes sense. But from a customer point of view, they have both. And as a vendor who actually has both, and by the way, don't characterize us just as hardware vendors; we have virtual edges, we have hypervisor switches that run across all hypervisor platforms. So we have the ability to provide both. So, if you have the option to choose a platform or solution that gives you visibility into everything, why not? I mean, don't choose it based on your business model or business practice. All right, well, Dan, that's... Sure, I might have something to say about that. I thought you might, but, you know... No, I think... I agree that overlays do not solve the whole problem. Definitely not. That said, for the rest of the problem, how to manage the physical network, I think there are various ways to skin this cat, right? And we're probably not going to come up with the right architecture here today. But what I'm saying is, I don't think that we need to do OpenFlow. That's not the only way to solve that problem. And in fact, some of the things that Motoki-san touched on earlier, things like provisioning QoS and things like that, you actually can't do with OpenFlow anyway, right?
You can't provision QoS traffic classes and things like that right now on switches with OpenFlow, unless you've made some extensions, which some vendors might have. So, I agree, we need a way to be able to do that, to automate that. There's no standard API for that right now. On the other hand, many switch platforms are becoming more open, running Linux, so you could probably write your own software, you know, and orchestrate it in that way. I just think that there are multiple ways to attack this problem. Maybe one single solution from a vendor like Cisco; maybe Cisco has the capability to build that one solution in their own ecosystem. But in a multi-vendor ecosystem, I think there is no solution right now. It might not be one single software control platform that handles the whole thing. Maybe Quantum is the thing that ultimately brings them together to manage the overlay and the underlying network. Maybe. So, I think regardless of how this debate ends up playing out, hardware acceleration of some kind in network virtualization technology is going to become important. Tunneling in software is just not a sufficiently efficient solution to take over this market long term. So, if you come to that conclusion, there are a number of places you can go with it. You can say, well, if overlays are the answer, then switches should understand how to encap and decap packets. And that's one direction the market can go, and actually, you already see vendors adopting technology like that. I think that's actually very beneficial to everyone. In fact, it's not even opposed to OpenFlow technology. Fundamentally, OpenFlow talks about having the centralized control plane and creating a standard for how it speaks to switches. Now, some of that standard could actually be about encapping and decapping packets, for example. So it could live alongside encapsulation technologies. It's not like you have to choose one or the other.
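[Editor's aside: the coexistence being described here can be pictured with a toy match-action table: a centralized controller installs rules, and one of the installed actions happens to be a tunnel encapsulation. This is a sketch of the concept only; the action strings like push_vxlan are made up for illustration and are not real OpenFlow actions:]

```python
# A toy flow table in the OpenFlow spirit: the controller programs
# match -> actions entries, and the "switch" just looks them up.
flows = {}

def install_flow(match, actions):
    # Controller-side: push a rule down to the switch's table.
    flows[match] = actions

def handle_packet(in_port, dst_mac):
    # Switch-side lookup. A real switch would punt a miss back to
    # the controller as a packet-in; here we just return a marker.
    return flows.get((in_port, dst_mac), ["CONTROLLER"])

# The controller decides: traffic arriving on port 1 for this MAC
# gets encapsulated into overlay network 5001 and sent out a tunnel.
install_flow((1, "aa:bb:cc:dd:ee:ff"),
             ["push_vxlan vni=5001", "output:tun0"])

assert handle_packet(1, "aa:bb:cc:dd:ee:ff") == ["push_vxlan vni=5001", "output:tun0"]
assert handle_packet(2, "aa:bb:cc:dd:ee:ff") == ["CONTROLLER"]
```

The point of the sketch is that centralized control (who installs the rules) and encapsulation (what the rules do) are orthogonal choices, which is exactly the coexistence argument.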
Now, it's true that in an all-OpenFlow network, you may choose not to use an encapsulation method for many reasons, but you can actually run both technologies alongside each other. It's not going to be a pure black-and-white answer long term. But I do think hardware acceleration of these capabilities will absolutely be essential to keep networks functioning at line rate. Nachi, comments from a carrier perspective? And I think you'd identified being able to do overlay with carrier-class capabilities like MPLS. So actually, we're going to use many technologies, maybe MPLS for one customer, VXLAN for another, something like that. As Dan said, OpenFlow is not the only answer, so maybe multiple technologies can cooperate. So we propose the meta-plug-in for Quantum. Using the meta-plug-in, each Quantum network has a flavor, and a flavor corresponds to a vendor plug-in or an open-source plug-in. So we can provision many types of networks on one single control plane, the control plane here meaning Quantum, because Quantum manages the tenant-facing models and provisions them onto the actual network. One plug-in may be OpenFlow-based, one plug-in may be overlay-based, and all of that can be managed. So actually, let me do a quick level set with the audience. How many people have actually gotten their hands into the Quantum project today, or are actually living with plug-ins in one form or another? Can we get a quick show of hands? All right, good. So we've got reasonable depth here. Which actually leads me to what I think is an interesting next step. You talked about meta-plug-ins. Have we gotten to a state in Quantum right now where there are too many plug-ins? It seems like, from a high-level perspective, every vendor on the planet is putting together a Quantum plug-in. How are they going to handle virtual networking? It'll be in a Quantum plug-in, right?
So especially when we look at things like network state databases and a lot of the networking, how do we expand into a much larger environment? Have we gotten to a world in which it's getting a little too fractured right now? And it sounds like, Nachi-san, you'd say definitely, if you're looking at a meta-plug-in. Yeah, basically we welcome newcomers for the plug-ins. But they also have a responsibility, when pushing code, for maintenance. So a lack of documentation, or no way to try it out, something like "just buy this product," is not the way. So, yeah. I was going to say that Cisco already has a plug-in where we plug into multiple different underlying things on our side. You know, we have a Cisco plug-in which has a UCS plug-in, a Nexus plug-in, a Nexus 1000V plug-in, and a few other plug-ins. So we've already developed that technology, and I believe there's a blueprint coming down the road where you can plug multiple Quantum plug-ins into a Quantum plug-in. I guess from my perspective... We have the plug-in plug-in plug-in. Recursion is your friend. From my perspective, I don't really see too many plug-ins as a problem. It certainly creates a management issue from a code perspective if they're all actually pushed into Quantum and someone QAs them all and now there's something to maintain, but that's more of an open-source management issue, where you move them out and manage them separately. Actually, from an OpenStack perspective and from a customer perspective, the best thing we can have is that any solution you might want to work with should actually work with OpenStack and work with Quantum. And I actually think that's the best of all worlds. And then it's actually...
The vendor's challenge to actually make their technology relevant in Quantum and actually show that it has a differentiated capability. You look like you're on the edge of a comment. Go for it. I think your question about whether there are too many plug-ins boils down to: are there too many vendors? Right? Quick show of hands. Too many vendors? Maybe there are. But don't worry, some of us will die off at some point. At least the start-ups. No, but seriously, I think... I said some, not all. Getting both the support here, so this is good. No, but in all seriousness... Funding. We take credit cards. We just got some, so we're still around. The Quantum plug-ins that are there right now don't all do the same thing. In other words, you can't really swap one out for another. You are adopting not just the Quantum plug-in; you are adopting a whole different architecture for the way that you design the network. And what I was saying earlier is that right now there is no way, other than the Cisco Franken-plug-in, to integrate both control of the physical plane and the overlay plane, if that's what you're doing, from two different vendors. There's no way to do that right now. So perhaps we'll get to a standard for that at some point, maybe the meta-plug-in, let's say. I think the point was, for example, since you can only run one plug-in, that plug-in cannot actually solve everybody's problems. So unfortunately, you know, we have to expand that space, right? You could argue which is the right architecture, but definitely there's no lack of options. Multiple plug-ins are not a problem. I think Quantum and SDN are in an emerging phase, so overall growth is important. Now we are gathering many use cases through the various plug-ins, and I believe that contributes to the progress of network virtualization. Many plug-ins may be reviewed or refactored at some point in the future, but right now it is a phase to provide options to users. So I think I agree with all of you.
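[Editor's aside: the meta-plug-in "flavor" dispatch Nachi described earlier can be sketched in a few lines. The class and method names below are illustrative only, not the actual Quantum code:]

```python
# Sketch of the meta-plug-in idea: each network carries a flavor,
# and the meta-plug-in dispatches each call to whichever backend
# plug-in is registered for that flavor.
class MetaPlugin:
    def __init__(self):
        self._plugins = {}

    def register(self, flavor, plugin):
        # Map a flavor name to a backend plug-in instance.
        self._plugins[flavor] = plugin

    def create_network(self, name, flavor):
        if flavor not in self._plugins:
            raise KeyError(f"no plugin registered for flavor {flavor!r}")
        return self._plugins[flavor].create_network(name)

class FakeOpenFlowPlugin:
    """Stand-in for an OpenFlow-based backend plug-in."""
    def create_network(self, name):
        return f"openflow:{name}"

meta = MetaPlugin()
meta.register("openflow", FakeOpenFlowPlugin())
assert meta.create_network("tenant-net", "openflow") == "openflow:tenant-net"
```

This is the "single control plane" point from above: the tenant talks to one API, and flavor selection decides which vendor or open-source backend actually realizes the network.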
So is this a problem that goes away if there's greater definition to the abstractions that Quantum presents? And I guess the other side of that is: do we wind up limiting our options if, in fact, we start to create more definition around what Quantum offers? Let me answer this question. So Bob from Red Hat is working on the modular L2 plug-in. In the future, a plug-in that provides only L2 may become one driver of the modular L2 plug-in, so we can combine many plug-ins, something like that. For now, if we have a new function X, all plug-ins have to be updated. That's definitely a problem, but we are working on the modularity, where L3 stuff and L2 stuff can be combined. Then each plug-in can offer the new functionality that gets added in the community. I guess the other comment I would make is that I actually think Quantum has done a very good job today of choosing the right level of abstraction. The APIs are sufficiently abstract to define the high-level tenets of networking without dictating that they map to traditional networking structures or formulas, for example. They are actually sufficiently vague to support both new SDN architectures and very traditional switching architectures that are relatively static. And I actually think that is the right level, and it is not easy to achieve. So kudos to the founding team members of Quantum. I actually think they've done a great job of finding that balance and actually maintaining it. I agree that the level of abstraction is good right now, and that's mostly because the Quantum APIs focus on the tenant-facing functions, right? So you've got to make those abstract and implementable by everyone.
Where things get interesting, as I said, is when you start talking about the operator functions, where the architecture really starts to become more visible, rather than completely invisible as it is to the tenants. That's where Quantum is still weak, and hopefully we'll be addressing those areas soon. Well, I was going to head to another question, although you talk about the operator perspective, and that actually brings up an interesting point. Last year at the Open Networking Summit, Dave Ward made the comment that resource control in networks is really important. He sort of brought up Silverlight as an example, where you can dial up performance and it's left open to the developer to pick what they want in terms of performance. And of course, if there's a knob that goes up to 11, every developer is going to say, hey, turn it up to 11, right? As he put it, he did not want to see network control handed to a bunch of drunken frat boys coding Farmville apps. Is there capability, I'm presuming from the operator side, or more that we need to do within Quantum, to be able to manage some level of the resource allocation pieces? And if so, does that need to integrate with upper-level management? Is that orchestration? Where do those kinds of resource management pieces fit within OpenStack? Yeah, I agree. I think right now, most of the solutions are day-zero provisioning solutions, where you click, click, click, you've got a tenant set up, but you don't know where the VMs are getting placed and how the physical network, for example, is behaving. And the thing is, it's not just day zero, it's day one, two, three, four, five, six. How does your app behave every day? Your app might behave perfectly today, but then it doesn't tomorrow because of some congestion, because VMs could be moving around and things like that. So I think that's the thing that nobody talks about.
Right now, the solutions aren't mature enough for you to actually worry about that. Right now, it's more like, hey, can I get something up and running? Hey, it's cool, right? Now, you know, in real production enterprise environments, you need to worry about app performance consistency, and so having that good tie-in between the physical and the virtual, and network resource issues, will definitely come into play. It's not coming into play today because nobody has... I mean, except for a few instances. We haven't hit those limits yet. We haven't hit those limits, so... If I understood the question correctly, just to make sure I did... You're talking about whether arbitrating access to shared resources that are limited is something that you want to leave up to the developers, essentially, the drunken frat boys, right? I think that any time we have a scarce resource that we're trying to allocate, the question of pricing has to come into effect, right? So pricing is something where we can provide differentiated services. If somebody wants a faster network, they've got to pay more. In that sense, sure, give them the access, and then we can design something that takes pricing and available resources into account and, well, prices accordingly, right? So in that sense, I don't feel like giving this level of programmability to the users, the tenants, is a problem, as long as the system, perhaps Quantum, can arbitrate those resources. Well, I think that's my larger question. Is that something where Quantum arbitrates it? Is that a larger resource construct that exists within OpenStack? Well, it could be. I mean, we don't have that yet as a concept in Quantum, different classes of service like that, but perhaps we'll be going there. Yeah. I guess I see this as a direction Quantum probably should go in the future.
It sort of takes the form of, you know, provider tools to actually manage tenant resources, again at an abstract level, but you'll be able to define different levels of resource for different tenants and actually have the underlying network enforce that in whatever way it can. It actually touches on the larger point of bringing, effectively, server administrator teams together with teams that have been managing the network separately, and actually having them work together so you can offer these APIs, because any API you create here will inherently touch the physical network. So you'll actually need coordination between multiple different groups to achieve it. So in Grizzly, Quantum has scheduling functionality. I think this is a great improvement, so we can schedule actual resources, and we have multiple routers and DHCP agents being scheduled. And I think there's also a session about integrating the Nova scheduler, the Cinder scheduler, and the Quantum scheduler. I think scheduling is the big key in the cloud, because we can better utilize the hardware and networks. So we are also very interested in the improvement of scheduling, gathering a lot of actual logs and information from the cloud, using maybe some metering or some kind of big data analysis platform. I have a very similar opinion to Ueno-san. I think the network controller should be responsible for network resource allocation, but the Quantum API is very important, so Quantum needs to provide it. Compute and storage use the network, so we need a way to request that network resources be allocated. That API is very important, so Quantum should provide it. In addition, I expect network distance from compute and storage to become a key factor in scheduling compute and storage. It is required to achieve the total performance.
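[Editor's aside: the Grizzly-era scheduling idea mentioned above, spreading routers and DHCP agents across network nodes, can be sketched as a least-loaded picker. This is an illustration of the concept, not Quantum's actual scheduler code:]

```python
# Place each new router on the least-loaded L3 agent, so routers
# spread across the available network nodes.
def schedule(agents, loads, new_router):
    # Pick the agent currently hosting the fewest routers;
    # ties go to the agent listed first.
    target = min(agents, key=lambda a: len(loads[a]))
    loads[target].append(new_router)
    return target

loads = {"agent-1": ["r1", "r2"], "agent-2": ["r3"]}
assert schedule(["agent-1", "agent-2"], loads, "r4") == "agent-2"
```

A real scheduler would weigh more than router count (agent liveness, capacity, and, as Motoki-san suggests, network distance to the compute and storage being placed), but the shape of the decision is the same.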
And actually, Motoki-san, you raised a really good point about distance and the physical nature of networks starting to come into play. There's been a lot of discussion about hybrid deployments, and to clarify things, when I talk about hybrid I mean cloud capabilities in some way connected to some sort of back end that's either hosted, managed colo, what have you, but at least outside of that cloud environment. They may be in the same physical facility, but you've got to span that gap out of the cloudy world. There are a lot of challenges in terms of really identifying real underlying network performance when you start to make that jump. How do you deal with that in an OpenStack world, and what are the things we're working on now, or starting to look towards in the future, that could handle that? And we can start back from your side, Motoki-san, if that works. In my opinion, the network connectivity between data centers should be a part of the virtual network. So if we define a virtual network across data centers, the network controller provisions the connection across data centers in addition to provisioning inside the data center. I believe it provides a simple view; it is important to provide a simple view to users. Now we are developing this concept on top of the open core controllers; I think the key is achieving this modeling of network connectivity across several sites, plus a delegation mechanism. Yeah, I agree with Motoki-san, but the answer is quite simple. The VPNs we are doing nowadays connect many sites, so we use VPN, and we are also working on VPN virtualization; many people are joining that effort, and we have a session tomorrow evening, so please join. We have a wide variety of VPNs, and if we choose MPLS BGP, the actual connection will be tuned up using best-path selection. So the answer is quite simple for us.
I definitely think this is one of the important items we're hearing over and over again from customers; they're interested in it. There are a number of technical challenges to solve as you get there. You mentioned latency and speeds as you operate across data centers, but there's also the question of having the same vendor's technology on both sides versus actually operating across different technologies. So obviously something like VPN can work well, but it depends. If you're using tunneling protocols, making sure your virtual networks have coordinated information to sync across these different environments, potentially across vendors, can certainly be a complicated problem, and that's obviously where Quantum can step in and actually make a bit of a difference. So the short answer for us: it's definitely top of mind, it's something we're working on, and hopefully we'll be doing it with folks in Quantum as well. Speed-of-light issues we start dealing with... I figured if we've got a project name like Quantum, we need to find some of those badly behaved neutrinos that the LHC found and get over the speed-of-light stuff. As these guys have said, we do want the hybrid cloud model. That said, can we really have a single controller that handles both sides of the equation? Probably not. Just as Mike said, maybe they are different vendors; there are certainly different administrative domains, and probably different failure domains. So these two things really are separate, and I think even within the same administrative domain of one provider, multiple data centers may fail independently. So it's a problem of orchestration and federation. Maybe Quantum can help with that. If everybody in the world is running OpenStack and Quantum, then that would be great, but we'll see how that goes. Maybe. Yeah, I think definitely the customers are starting to look at, first of all, obviously building a private cloud and then extending to another data center; that's the first step they're
looking at: a single provider with multiple data centers, and then multiple cloud providers. I mean, that's where they're going to get to: I want to be able to move my workload within the data center, across the data centers of my one provider, and then to multiple providers as well. There are multiple issues which have got nothing to do with OpenStack: speed-of-light issues, encryption issues. So there's a whole bunch of issues besides extending the L2 or L3 and the security policies when you move the VM. Do you get the same consistent performance for your workload? I think this is probably a next-year issue; that's my honest opinion. People are definitely looking into it, but first steps first, right? Build it there, and then go beyond that afterwards, once we have those OpenStack edges and do the handling beyond them. I wanted to make sure that we have time to answer any questions that the audience happens to have, so we have a microphone here; if you have questions to ask of the panel, by all means, we're happy to take them. Any takers? All right, we'll keep thinking, and I have no shortage of questions here. One of the challenges we've faced in OpenStack broadly is simply getting a large enough volume of qualified people who understand the process in order to keep things moving at the speed at which they ought to be able to go. Do Quantum and networking, and the capabilities that are there, give us some tools to solve some of these problems in networking? Are there ways in which we can approach that, or is this something that we need to tackle in a new way? Is this going to be an even greater challenge in networking, to get the right people into the right spots? Definitely, this is new for networking people. I've been a networking guy for a while, and in the last couple of years I've been installing OpenStack myself. It is definitely an experience. Many of the traditional networking people overall are not familiar with it.
There needs to be a transfer of skill sets, with learning on both sides. If you look at server or virtualization admins, they are not used to all the L3 gateways, L2 gateways, VPNs, internet access; they didn't deal with those before, and now they have to learn them. In the same way, I don't think there is a lot of that knowledge on the networking side of the world. There has to be a bridge, and everybody needs to start learning Python.

Inevitably, yeah. How many network admins out there know Python? Maybe a few of you. I think maybe we went off on a slight tangent there, or maybe I will, but there is definitely a big gap between the traditional networking folks and the DevOps folks, and I am actually neither one in this space. As we architect a complete system, we need to build that knowledge: we need to be able to build a physical network, and we need to be able to deal with these issues in the virtual networking plane as well. Maybe opening this up to the folks doing the coding will actually help people learn more about a network they may have assumed is under the total control of some team far away in a dark basement.

Yeah. Actually, maybe we will take an audience question.

I am glad to hear that you guys have appreciated that we tried to get the abstractions right. The reason it is called Quantum has nothing to do with entanglement; we would like to do that someday, to get over that pesky speed-of-light issue, but it is not in Havana, we haven't figured that part out yet. The right abstraction for entanglement really would be action at a distance. But we started very simple, and we then tried to add incremental functionality in a number of steps. What I wanted to find out from the panelists here is: has that worked for you? We tried to really separate out what a developer sees, which is a very simplified, abstract view of networking; they are not going to understand different networking protocols, nor should they have to. But I want to make sure that, as we bring things up, and I agree that operators will later interact with this, we think about how we start to set up some of these things. We have mechanisms for extensions, but I really wanted to hear from you about the Quantum process. We are at the OpenStack conference, and a lot of this is us trying to get together to map out the next release. Do you have any suggestions, whether we continue as we have been doing, if that has been working for you, or how we might change? For instance, we have meta plugins; is that working for you? As a next step, we have to start involving the operators and the system providers.

I guess I will go first. From our perspective, we started out as outsiders to Quantum, and it took us some time to find our way in. We wrote a plugin, some folks joined our team, and there was a bit of a warming process, but once that happened we found it a great group to work with. From a process-management perspective, I do think it is working well; it just takes the traditional warming period of getting to know everyone and earning your stripes through the work being done. I also think the right bite-sized pieces are being taken on. Everyone realizes services are very important: load balancer was taken on as the prototype, people are looking at firewall now, and I think that will accelerate; once you get a couple sorted out, it will be easier to take five at a clip. So far the work coming out of Quantum is at the right level of abstraction, and that is a hard problem, so I have been pretty impressed with the way it is being done.

I think the process itself is fine. With respect to these operator features we are discussing, though, my fear is that there is not that much incentive for vendors like us and others to really work on making them abstract; that is my feeling.
What I have noticed, actually, is that even in our own software we have tended to say: yes, we could propose a blueprint for a Quantum extension, but we can just write a Python script that calls our own API and does the same thing. It is horrible and lazy, because it bypasses the authentication and authorization that Quantum should front for the tenant; it doesn't matter to us because it is all trusted on the provider side. I guess there is just not that much incentive for us to be a good citizen in that sense. So we should probably involve the operators, be more deliberate about that discussion, and make it a priority, in order to get some standardized things out, like the L3 gateway and L2 gateway, which have been languishing a bit.

I think one of the other secondary questions to that, particularly for the start-ups: we are getting together as an industry to solve this, which is pretty unusual. Do you find that is actually turning out to be a business benefit to you, being able to join this larger kind of organization to drive these things forward rather than battling it out individually?

I think it should be, because the top question everybody asks is how do you differentiate from SDN start-up X, rather than what benefit does this provide to me as a customer. So we should probably do more to educate the market about the overall benefit of what we are doing here. I guess we still have some fundamental disagreements, so it will take some time.

Let me answer two different questions. On the first: we are also operators, so we are going to propose a blueprint for monitoring that will help with diagnosing failures. On the second point, whether it is benefiting us as a company: the biggest benefit is that we can share use cases, and then we can get many companies' support on the services. Thank you.

Well, we're actually just over our limit. Is it a really quick question? If so, come on up; otherwise, follow up with the panel afterwards. I'd like to get a quick round of applause for the panel. Thank you all for coming out.
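As a postscript, the "lazy" shortcut one panelist described, a script that hits the vendor controller's API directly instead of going through a Quantum extension, can be sketched as follows. The endpoint URLs, the gateway resource, and the token values are hypothetical; `X-Auth-Token` is the real Keystone token header, but everything else is an illustrative assumption, not any vendor's actual API.

```python
# Minimal sketch contrasting a direct vendor-API call with the same operation
# routed through Quantum, which fronts Keystone authentication and tenant
# scoping for the caller. Both functions just build the request they would
# send, so the difference in identity handling is visible side by side.

def direct_vendor_request(vendor_url, payload):
    """The 'lazy' path: call the vendor controller directly.

    No tenant identity is attached; everything is implicitly trusted
    on the provider side, which is exactly the problem described."""
    return {
        "url": f"{vendor_url}/gateways",          # hypothetical resource
        "headers": {"Content-Type": "application/json"},  # no auth token
        "body": dict(payload),
    }

def quantum_request(quantum_url, keystone_token, tenant_id, payload):
    """The same operation expressed as a Quantum API extension call.

    The Keystone token identifies the caller, and the resource is scoped
    to a tenant, so per-tenant policy checks can apply to each request."""
    return {
        "url": f"{quantum_url}/v2.0/gateways",    # hypothetical extension
        "headers": {
            "Content-Type": "application/json",
            "X-Auth-Token": keystone_token,       # authn via Keystone
        },
        "body": dict(payload, tenant_id=tenant_id),  # authz scoped to tenant
    }
```

The point of the contrast is the panelist's own: the direct call is less code for the vendor, but it silently discards the authentication and authorization boundary that a standardized extension would enforce.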